[Haskell-cafe] Re: [Haskell-beginners] folds -- help!
That's why I said "for appropriate g and f". But I see that my wording was misleading. However, you can say: foldl' oplus alpha bs = (foldr f id bs) alpha where f a g = \alpha -> g (alpha `oplus` a) foldr' oplus alpha bs = (foldl f id bs) alpha where f g a = \alpha -> g (a `oplus` alpha) And it works as long as oplus is strict in both arguments. On 10.03.2009 at 21:54, John Dorsey wrote: Adrian Neumann wrote: Notice that there is no difference between foldr g a and foldl f a (for appropriate g and f) if g and f are strict in both arguments. Be careful... as apfelmus noted elsewhere in this thread, that's not (in general) true. Prelude> foldr (^) 2 [3,5] 847288609443 Prelude> foldl (^) 2 [3,5] 32768 The reason? Integer exponentiation (^) isn't associative or commutative. So the first is (3 ^ (5^2)) = 3^25, while the second is ((2 ^ 3) ^ 5) = 2^15. Cheers, John ___ Beginners mailing list beginn...@haskell.org http://www.haskell.org/mailman/listinfo/beginners ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
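The duality described above can be checked directly. Below is a small, self-contained sketch; the function names (foldlViaFoldr, foldrViaFoldl) are mine, not from the thread:

```haskell
-- foldl expressed via foldr, and vice versa, by folding up a chain of
-- functions that each transform the accumulator.
foldlViaFoldr :: (b -> a -> b) -> b -> [a] -> b
foldlViaFoldr oplus z bs = foldr f id bs z
  where f a g = \acc -> g (acc `oplus` a)

foldrViaFoldl :: (a -> b -> b) -> b -> [a] -> b
foldrViaFoldl oplus z bs = foldl f id bs z
  where f g a = \acc -> g (a `oplus` acc)
```

With a non-commutative operator such as (-), the two directions give different answers (foldl (-) 100 [1,2,3] is 94 while foldr (-) 100 [1,2,3] is -98), which is exactly why the original "no difference" claim needed qualification.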
Re: [Haskell-cafe] State monad is missing Applicative instance
I was using the transformers package but still had to implement the Applicative instance of State. This package contains an Applicative instance for StateT but not for State. On Sat, Mar 14, 2009 at 3:05 AM, Henning Thielemann lemm...@henning-thielemann.de wrote: On Thu, 12 Mar 2009, Peter Verswyvelen wrote: I think. Or is it defined in some other package? The 'transformers' package has those instances.
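For reference, the instance the poster had to write is only a few lines. A minimal sketch using a hand-rolled State type, to stay self-contained (the State of the time was, like this, a newtype around s -> (a, s)):

```haskell
import Control.Applicative (Applicative (..))

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  -- pure yields a value without touching the state
  pure a = State $ \s -> (a, s)
  -- <*> threads the state through the function action, then the argument
  State mf <*> State mx = State $ \s ->
    let (f, s')  = mf s
        (x, s'') = mx s'
    in (f x, s'')
```

For example, runState (pure (+1) <*> State (\s -> (s, s + 1))) 0 evaluates to (1, 1): the state is threaded left to right, just as in the Monad instance's ap.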
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
On Sat, Mar 14, 2009 at 2:36 AM, Henning Thielemann lemm...@henning-thielemann.de wrote: a lot. However, isn't this halfhearted, since we all wait for full dependent types? :-) Well, in C++ one can already use numerical values with templates to achieve a lot of compile-time computation. So I would be very happy to have this feature in Haskell. It might also be good research towards full dependent types, no?
Re: [Haskell-cafe] ANNOUNCE: Vintage BASIC 1.0
I've posted a new version, 1.0.1, that has a usage message if you run it with no arguments. There is a package for Windows now (not just a binary) as well as Linux. Both packages include documentation and example programs. Thanks for pointing that out! On Fri, Mar 13, 2009 at 1:31 AM, Wolfgang Jeltsch jelt...@informatik.tu-cottbus.de wrote: When running the executable, nothing happens. The executable should show a usage message instead. Where are the BASIC programs? I think you have to grab them from the source. They should be installed, too. Best wishes, Wolfgang
[Haskell-cafe] Problem with cabal-install where package requires mutually exclusive versions of another package
I am trying to install a package using cabal-install; however, the package requires an older version of QuickCheck, and one of the required packages requires the latest version: $ cabal fetch Reactive Resolving dependencies... cabal.exe: cannot configure Stream-0.3.1. It requires QuickCheck >=2.0 For the dependency on QuickCheck >=2.0 there are these packages: QuickCheck-2.1 and QuickCheck-2.1.0.1. However none of them are available. QuickCheck-2.1 was excluded because checkers-0.1.3 requires QuickCheck <2.0 QuickCheck-2.1 was excluded because reactive-0.10.5 requires QuickCheck <2.0 QuickCheck-2.1.0.1 was excluded because checkers-0.1.3 requires QuickCheck <2.0 QuickCheck-2.1.0.1 was excluded because reactive-0.10.5 requires QuickCheck <2.0 How can I get around this? I could work around this and install the packages individually. Can I have two versions of a package installed (i.e. QuickCheck), and will everything get resolved correctly? Mark
[Haskell-cafe] How to catch error in array index when debugging
I'm getting a runtime failure Error in array index. This causes ghci to exit. Is there a way to get it to break instead, so I can find out which function is failing? -- Colin Adams Preston Lancashire
Re: [Haskell-cafe] Vim support for cabal
Great! I'm just starting with Vim, so I'm not sure if I could contribute, but I definitely will look into the sources. Thanks! Pavel On 13.03.2009, at 21:44, andy morris wrote: 2009/3/13 Pavel Perikov peri...@gmail.com: Hi café! I googled for some kind of Vim support for cabal but found nothing. I mean syntax highlighting of .cabal files and probably integration with haskellmode. Did anyone hear about such a thing? Pavel I've been wanting something like this as well, so you inspired me to finally get round to writing it. :D I've stuck it on Patch-tag, so: darcs get http://patch-tag.com/publicrepos/cabal-vim Nothing special at the moment; it just highlights field names and does some basic autoindenting. If you (or someone else on the list) have any suggestions, then just say (or send a patch if you want :) ). I might add to it over the weekend as well (highlighting of compiler names in the 'tested-on' field, that sort of thing), so perhaps check back in a few days.
Re: [Haskell-cafe] broken IO support in uvector package, when using non primitive types
I have added UAProd-based Binary instances to my copy of the uvector repo at http://patch-tag.com/publicrepos/pumpkin-uvector . I have some extensive tests (added to the existing tests directory) of things in there and they seem to be mostly sane. The Binary code isn't at all pretty right now, and the UnsafeIO (which I'm thinking of renaming to UnsafeBinary, thoughts?) that comes with it is even uglier, but it'll be improving soon. When I clean it up, I'll submit a patch to the main uvector repo and we can see how it goes (maybe at that point we can pull out UIO). Also in that repo are changes that add safety checks to all the functions I know to be unsafe. Due to the fact that these functions would previously crash my tests, I'm currently only testing safe parameter combinations on them, so when I get the tests to check that errors are being thrown appropriately, I'll also submit a patch for that. Cheers, Dan On Fri, Mar 13, 2009 at 7:34 PM, Don Stewart d...@galois.com wrote: I'd like to do away with UIO altogether though. pumpkingod: The main issue seems to be that although the semantics of UIO may be arbitrary, Wallace's patch actually broke deserialization for any production-based UArr, and I'm not sure the benefits are worthwhile (loading a file someone else sent you) given that endianness is already not taken into account when loading (so the chances of someone giving you a raw binary file that happens to contain values of the correct endianness is rather low, it seems). On a related note, having a (more) complete testsuite might help prevent patches like this from breaking existing functionality. Maybe a way to resolve the issue is to use the file size and break it according to the size of the two halves of the production? This avoids prefixing array length, but allows the productions to still work. Either way, I'm in the process of writing a Binary instance, so maybe we can get the best of both worlds eventually. 
Thanks, Dan On Fri, Mar 13, 2009 at 7:21 PM, Don Stewart d...@galois.com wrote: manlio_perillo: Don Stewart wrote: [...] So, the patch is: just revert this change. Or... use your own UIO instance. That's why it's a type class! Why should I rewrite the UIO instance, if one already exists? Oh, because you want different serialization semantics to the (arbitrary) defaults. -- Don
Re: [Haskell-cafe] Style Guide for Haskell Code
Henning Thielemann wrote: Of course, style is a matter of taste. So there are as many good styles as programmers and there won't be an official style guide, I'm afraid. While that is true, it's no valid reason not to have a style guide. Sun's style guide for Java has been very successful, improving the legibility of Java code in general. No one has to adhere to it, but most people do anyway. Martijn.
Re: [Haskell-cafe] Re: Haskell-Wiki Account registration
Benjamin L.Russell wrote: Why not ask new users to identify letters in a random bitmapped image of a string, as is commonly done? Then any new user who still registers and starts submitting spam can be tracked and moderated. If this doesn't work out we can always use hackage's approach: have someone answer emails requesting new accounts. Martijn.
Re: [Haskell-cafe] How to catch error in array index when debugging
Hello Colin, Saturday, March 14, 2009, 11:39:41 AM, you wrote: I'm getting a runtime failure Error in array index. This causes ghci to exit. Is there a way to get it to break instead, so I can find out which function is failing? I recall two techniques: one is to trivially define your own (!) and at least print the index; another is to use GHC profiling with the -xc RTS option. -- Best regards, Bulat mailto:bulat.zigans...@gmail.com
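The first technique might look like this. A hedged sketch (the message format and the demo value are my own choices, not from the thread):

```haskell
import qualified Data.Array as A

-- A replacement for Data.Array's (!) that says which index was out of range,
-- instead of the uninformative "Error in array index".
(!) :: (A.Ix i, Show i) => A.Array i e -> i -> e
arr ! i
  | A.inRange (A.bounds arr) i = arr A.! i
  | otherwise =
      error ("array index out of range: " ++ show i
             ++ " (bounds " ++ show (A.bounds arr) ++ ")")

demo :: Char
demo = A.listArray (0, 2 :: Int) "abc" ! 1
```

Importing Data.Array qualified and shadowing (!) locally means only the modules you are debugging need to change.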
[Haskell-cafe] Re: Haskell-Wiki Account registration
Henning Thielemann wrote: How long will the Wiki account registration be disabled? Would it be possible to ask a question that real Haskellers could easily answer, but a spambot cannot? E.g. What's Haskell's surname? It will be re-enabled when an appropriate extension to MediaWiki is installed. An appropriate extension will be installed when MediaWiki is upgraded to a version that supports that. MediaWiki will be upgraded when PHP and MySQL are upgraded. MySQL cannot easily be upgraded on the existing distribution (RHEL AS 3 update 9 with Linux 2.4.21), as various other packages depend on the current version. MySQL will be upgraded when we have a more up-to-date distribution (for instance, Debian 4.0). http://haskell.org/pipermail/haskell/2009-January/020916.html We will have a more up-to-date distribution when a new machine takes over from the existing machine at Yale. I don't know when anyone will have a new machine. This is an overview of which machine does what: http://haskell.org/haskellwiki/Haskell.org_domain -- Ashley Yakeley
Re: [Haskell-cafe] How to catch error in array index when debugging
You can use the GHCi debugger http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci-debugger.html It can set breakpoints on exceptions. On 14.03.2009 at 09:39, Colin Paul Adams wrote: I'm getting a runtime failure Error in array index. This causes ghci to exit. Is there a way to get it to break instead, so I can find out which function is failing? -- Colin Adams Preston Lancashire
[Haskell-cafe] darcs fundraising drive - only $720 left to go!
Hi everybody, Just a quick update on the fundraising drive. We are trying to raise $1000 to help pay for travel to the upcoming Haskell hackathon (the second of our biannual darcs hacking sprints). So far we have raised $280, so we're almost a third of the way there. Think you can help? Details on this page http://www.darcs.net/donations.html Many thanks! Eric PS. Ultimately, what we are trying to buy is *time*. So if you have a few spare moments in your week, the most generous donation you can make is by offering us a little of your time. Come join us in Utrecht, 17-19 April or on #darcs! Otherwise, we accept Paypal :-) -- Eric Kow http://www.nltg.brighton.ac.uk/home/Eric.Kow PGP Key ID: 08AC04F9
[Haskell-cafe] Hashing over equivalence classes
Are there some known ways to define hashing (or any other) functions over equivalence classes? I.e. a ~ b => hash(a) == hash(b), where (~) is some equivalence relation. For example, you might want to hash lambda terms modulo alpha-equivalence, or hash logical terms with respect to commutativity of (&&) and (||). Often we can choose a 'canonical' element from each class and hash it. But (at least in theory) it's not necessary. So, are there (practical) ways to define a hash function without it? -- Roman I. Cheplyaka :: http://ro-che.info/ Don't let school get in the way of your education. - Mark Twain
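Two common workarounds can be sketched for the simplest such equivalence, lists up to reordering. The names and the throwaway string hash below are my own, purely for illustration: either combine element hashes with a commutative, associative operation (so the result cannot depend on order, and no canonical element is ever built), or canonicalize first and hash the representative.

```haskell
import Data.Char (ord)
import Data.List (sort)

-- an ad-hoc djb2-style string hash, just for demonstration
hashStr :: String -> Int
hashStr = foldl (\h c -> 33 * h + ord c) 5381

-- hashing up to reordering: (+) is commutative and associative,
-- so any permutation of the inputs hashes alike
hashUnordered :: [String] -> Int
hashUnordered = sum . map hashStr

-- the 'canonical element' approach: sort to pick a representative, then hash
hashCanonical :: [String] -> Int
hashCanonical = hashStr . unwords . sort
```

The first approach answers the question in spirit (no representative is constructed), though a plain sum collides more easily than a well-mixed commutative combiner would.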
Re: [Haskell-cafe] How to catch error in array index when debugging
I'm getting a runtime failure Error in array index. This causes ghci to exit. Is there a way to get it to break instead, so I can find out which function is failing? i recall two techniques - one is trivially define your own (!) and print index at least. another is to use ghc profiling with -xc RTS option None of which is satisfactory. You might also want to add yourself to this ticket: index out of range error message regression http://hackage.haskell.org/trac/ghc/ticket/2669 Claus
Re: [Haskell-cafe] broken IO support in uvector package, when using non primitive types
Daniel Peebles wrote: I have added UAProd-based Binary instances to my copy of the uvector repo at http://patch-tag.com/publicrepos/pumpkin-uvector . I have some extensive tests (added to the existing tests directory) of things in there and they seem to be mostly sane. Thanks for the work. I have a question, however. Isn't it possible to write binary serialization code for the generic Stream type? Manlio
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
Hi Henning On 14 Mar 2009, at 01:36, Henning Thielemann wrote: On Tue, 10 Mar 2009, Conor McBride wrote: Apologies for crossposting. Please forward this message to individuals or lists who may be interested. In addition to the recently advertised PhD position at Strathclyde on Reusability and Dependent Types, I am delighted to advertise the following PhD opportunity. {- -- Haskell Types with Numeric Constraints -} Sounds like it could simplify http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dimensional/ Hope so. a lot. However, isn't this halfhearted since we all wait for full dependent types? :-) Rome wasn't burnt in a day. Of course I want more than just numerical indexing (and I even have a plan) but numeric constraints are so useful and have so much of their own peculiar structure that they're worth studying in their own right, even for languages which do have full dependent types, let alone Haskell. We'll carry out this project with an eye to the future, to make sure it's compatible with full dependent types. Be assured (excited, nervous, etc...) that this is not halfhearted: it's a wholehearted start. All the best Conor
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
On Saturday, 14 March 2009, at 08:19, Peter Verswyvelen wrote: Well, in C++ one can already use the numerical values with templates for achieving a lot of compile time computations. So I would be very happy to have this feature in Haskell. It might also be good research towards full dependent types no? I doubt that it will be a good thing to include full dependent types into a language with partial functions like Haskell. Conor, is Epigram currently under development? Best wishes, Wolfgang
[Haskell-cafe] Alternative to Data.Binary
Hi all, Is there a serialization library other than the Data.Binary from hackage? I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. It's very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. I don't care that much about it being extremely fast, I just want to stop worrying that if I try to read a file a few percent larger than the last time, my program will suddenly stop working. Best, -- Grzegorz
[Haskell-cafe] Re: Haskell-Wiki Account registration
Henning Thielemann lemm...@henning-thielemann.de wrote: On Fri, 13 Mar 2009, Benjamin L.Russell wrote: Why not ask new users to identify letters in a random bitmapped image of a string, as is commonly done? I assume, because those images are 1) not accessible by blind people 2) can be decoded by spammers, since they know how the images are generated by common software. Thus my suggestion was a simple Haskell specific question, which cannot be answered by stupid spambots. http://recaptcha.net is believed to be spam-proof, and there are good reasons to believe so, see http://recaptcha.net/security.html : It starts off with text that can't be OCR'd in the first place. It also features an audio mode for accessibility. Question-based captchas provide security by rarity. You can be sure that if a spammer really, really wants to spam on the wiki, it won't take long before a program is written that memoises all questions and answers. -- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.
[Haskell-cafe] Re: Alternative to Data.Binary
Grzegorz Chrupala wrote: Hi all, Is there a serialization library other than the Data.Binary from hackage? Yes. binary-strict is one alternative: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/binary-strict I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. Its very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. I don't care that much about it being extremely fast, I just want to stop worrying that if I try to read a file a few percent larger than the last time, my program will suddenly stop working. Best, -- Grzegorz
Re: [Haskell-cafe] Alternative to Data.Binary
On Sat, Mar 14, 2009 at 1:37 PM, Grzegorz Chrupala grzegorz.chrup...@computing.dcu.ie wrote: Hi all, Is there a serialization library other than the Data.Binary from hackage? I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. Its very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. That little problem appears to be an artifact of the particular Binary instance for lists (the map instance starts by converting to/from a list), and should be fixable in principle, for example by writing them in chunks of 256 elements. That said, the current instance really should only have problems when *writing*, in that it'll force the entire list to figure out its length before it starts writing any of it. That this is not, in fact, the case suggests that the reader should be fixable without changing its on-disk format. -- Svein Ove Aas
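The chunking idea can be sketched independently of the binary package. This is my own illustration of the format Svein describes (length-prefixed groups of up to 256 elements, a zero-length group terminating the stream), not Data.Binary's actual instance:

```haskell
-- Split a list into groups of at most n elements.
chunks :: Int -> [a] -> [[a]]
chunks _ [] = []
chunks n xs = let (h, t) = splitAt n xs in h : chunks n t

-- Encode as (length, chunk) pairs; a (0, []) pair marks the end,
-- so the total length never has to be known up front.
encode :: [a] -> [(Int, [a])]
encode xs = [ (length c, c) | c <- chunks 256 xs ] ++ [(0, [])]

-- Read chunks until the zero-length terminator and reassemble.
decode :: [(Int, [a])] -> [a]
decode = concatMap snd . takeWhile ((/= 0) . fst)
```

Because each chunk's length is known after forcing only 256 elements, both the writer and the reader can stay lazy in the tail of the list, which is what the current whole-list-length prefix prevents.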
Re: [Haskell-cafe] Alternative to Data.Binary
Svein Ove Aas wrote: On Sat, Mar 14, 2009 at 1:37 PM, Grzegorz Chrupala grzegorz.chrup...@computing.dcu.ie wrote: Hi all, Is there a serialization library other than the Data.Binary from hackage? I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. Its very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. That little problem appears to be an artifact of the particular Binary instance for lists (the map instance starts by converting to/from a list), and should be fixable in principle, for example by writing them in chunks of 256 elements. I can confirm that reading serialized UArr from the uvector package does not cause memory problems. Manlio
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
Hi Wolfgang On 14 Mar 2009, at 12:00, Wolfgang Jeltsch wrote: On Saturday, 14 March 2009, at 08:19, Peter Verswyvelen wrote: Well, in C++ one can already use the numerical values with templates for achieving a lot of compile time computations. So I would be very happy to have this feature in Haskell. It might also be good research towards full dependent types no? I doubt that it will be a good thing to include full dependent types into a language with partial functions like Haskell. I think we could manage quite a bit, though. It seems reasonable to allow types to contain expressions drawn from a total fragment of the value language. Even a tiny fragment (such as the expressions built only from fully applied constructors and variables) would be a useful start (vector head, tail, zip, but not vector append). The present capacity for open type functions suggests that it shouldn't be too violent to add some total value-functions (probably with a flag allowing self-certification of totality). We should also ask what the problem is with partiality? I'd far rather have a simplistic Set-based intuition about what values in types might mean, rather than worrying about vectors of length bottom. The other side of that coin is that it makes perfect sense to have partial *computations* in types as long as they're somehow distinguished from total values, and as long as the system remembers not to *run* them until run-time. My point: it isn't all or nothing -- there's a slider here, and we can shift it a bit at a time. Conor, is Epigram currently under development? We've even stopped working on the engine and started working on the chassis. I'm in an intensive teaching block until the end of April, but from May it becomes Priority. The Reusability and Dependent Types project studentship will hopefully bring an extra pair of hands, come October. Epigram exists to be stolen from, so I'll be trying to encourage a literate programming style and plenty of blogging.
We've spent a lot of time figuring out how to make the system much more open and modular, so it will hopefully prove easier for people to contribute to as well as to burgle. The worst thing about Epigram 1 was just how monolithic and impenetrable the code base became. It was a valuable learning experience but no basis for further progress. This time, we optimize for clarity. I don't see any conflict -- indeed I see considerable synergy -- in working simultaneously on the experimental frontier of dependent type systems and on the pragmatic delivery of their basic benefits via a much more established language like Haskell. Indeed, I think we'll learn more readily about the engineering realities of developing applications with dependent types by doing plenty of the latter. I don't think functional programmers should wait for the next generation of languages to mature before picking up this particular habit... All the best Conor
[Haskell-cafe] Haskell Weekly News: Issue 109 - March 14, 2009
--- Haskell Weekly News http://sequence.complete.org/hwn/20090314 Issue 109 - March 14, 2009 --- Welcome to issue 109 of HWN, a newsletter covering developments in the [1]Haskell community. Congratulations to the authors of RWH on their [2]Jolt award! Some cool libraries released this week (as usual), and some really cool PhD [3]opportunities at Strathclyde. Also, it seems that I was censured last week for not including any quotes in the HWN, which is because tunes.org (which hosts the #haskell logs) was down while I was putting it together. So, this time I've included quotes going back two weeks, I HOPE YOU'RE HAPPY Community News Tom DuBuisson (TomMD) has [4]moved to Portland and will be starting a PhD at Portland State soon. Announcements darcs fundraising drive - only $720 left to go!. Eric Kow [5]announced that [6]donations are still being accepted to help pay for travel to the upcoming Haskell hackathon. So far we have raised $280, so we're almost a third of the way there. Think you can help? Vintage BASIC 1.0. Lyle Kopnicky [7]announced the initial release of [8]Vintage BASIC, an interpreter for microcomputer-era BASIC. Fully unit-tested, it faithfully implements the common elements of the language. On the web site, you can find 102 games from the classic book BASIC Computer Games, all of which run flawlessly. Have fun! ThreadScope: Request for features for the performance tuning of parallel and concurrent Haskell programs. Satnam Singh [9]requested feedback on infrastructure for logging run-time events and a graphical viewer program called [10]ThreadScope. The goal is for these features to make it into the next release of GHC. torch-0.1. Yusaku Hashimoto [11]announced a new unit test framework, [12]torch-0.1. sparsebit 0.5 - Sparse Bitmaps for Pattern Match Coverage. Ki Yung Ahn [13]announced the release of the [14]sparsebit library.
This library packages the functional pearl paper [15]Sparse Bitmaps for Pattern Match Coverage submitted to ICFP 2009 by Ki Yung Ahn and Tim Sheard. happs-tutorial 0.8. Creighton Hogg [16]announced the release of [17]happs-tutorial 0.8, which is compatible with [18]happstack-0.2. A number of changes have occurred in this release, including general code cleanup, migration to the new Happstack.Server.SimpleHTTP API, and more. Future 1.1.0 concurrency library. ChrisK [19]announced the [20]future package, which ought to do what C++ standard futures/promises do, plus a bit more. The main operation is forkPromise :: IO a -> IO (Promise a), which sets the IO a operation running in a fresh thread; the eventual result can be accessed in many ways (non-blocking, blocking, blocking with timeout). Holumbus-MapReduce 0.0.1. Stefan Schmidt [21]announced three new libraries: [22]Holumbus-MapReduce, [23]Holumbus-Distribution, and [24]Holumbus-Storage, which provide tools for building distributed systems. These libraries are used as the backbone of the [25]Holumbus search engine. Turbinado V0.6. Alson Kemp [26]announced the release of [27]Turbinado 0.6, a Rails-ish Model-View-Controller web serving framework for Haskell. New features include support for CGI serving, statically compiled Layouts, Views, and Controllers, lower case paths, support for cookies and encrypted cookie sessions, easier installation, and support for GHC 6.10. iteratee-0.1.0. John Lato [28]announced the hackage release of [29]iteratee-0.1.0. This library implements enumerators and iteratees [30]as proposed by Oleg Kiselyov. Harpy 0.4.4 - Runtime code generation for x86 machine code. Dirk Kleeblatt [31]announced the release of [32]Harpy 0.4.1, a library for runtime code generation for x86 machine code. The new release features additional Eq instances, support for new prefetching instructions, and some bug fixes. Discussion Suggestion for a Haskell mascot.
Maurício [33]suggested using a sloth as the Haskell mascot. If you would like to know how to say 'sloth' in just about every language ever, read this thread. Jobs Microsoft PhD Scholarship at Strathclyde. Conor McBride [34]announced another PhD opportunity at Strathclyde, sponsored by Microsoft Research, to investigate the practical and theoretical impact of extending Haskell's type system with numeric expressions (representing sizes, or ranges, or costs, for example) and constraints capturing richer safety properties than are currently managed by static typing. Blog noise [35]Haskell news from the [36]blogosphere. * Braden Shepherdson: [37]Pimp Your XMonad #4: Urgency Hooks. * Thomas M. DuBuisson: [38]Explicit Parallelism via Thread Pools. * Ketil Malde: [39]Current
Re: [Haskell-cafe] Re: [Haskell-beginners] folds -- help!
Adrian, That's why I said for appropriate g and f. But I see that my wording was misleading. Thanks for following up! I had thought you were arguing that foldl and foldr were easily and intuitively interchangeable; they're surely not so for beginners. Now I think you're arguing that given strict oplus, each can be generally expressed in terms of the other, allowing for different space/time complexity. Thanks for clarifying. I should have paid more attention to your appropriate g and f qualifier. Cheers, John
[Haskell-cafe] examples for error handling in takusen?
Hi, can someone please point me to error handling examples with takusen? I try to run a piece of code with takusen but just get the very sparse Database.InternalEnumerator.DBException
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
On Saturday 14 March 2009 8:12:09 am Conor McBride wrote: Rome wasn't burnt in a day. Of course I want more than just numerical indexing (and I even have a plan) but numeric constraints are so useful and have so much of their own peculiar structure that they're worth studying in their own right, even for languages which do have full dependent types, let alone Haskell. We'll carry out this project with an eye to the future, to make sure it's compatible with full dependent types.

One should note that ATS, which has recently been generating some excitement, doesn't have full dependent types, depending on what exactly you mean by that. For instance, its dependent typing for integers consists of:

1) A simply typed static/type-level integer type
2) A family of singleton types int(n) parameterized by the static type. For instance, int(5) is the type that contains only the run-time value 5.
3) An existential around the above family for representing arbitrary integers.

where, presumably, inspecting a value of the singleton family informs the type system in some way. But, we can already do this in GHC (I'll do naturals):

  -- datakind nat = Z | S nat
  data Z
  data S n

  -- family linking static and dynamic
  data NatFam :: * -> * where
    Z :: NatFam Z
    S :: NatFam n -> NatFam (S n)

  -- existential wrapper
  data Nat where
    N :: forall n. NatFam n -> Nat

Of course, ATS's are built-in, and faster, and the type-level programming is probably easier, but as far as dependent typing goes, GHC is already close (I don't think you'd ever see the above scheme in something like Agda or Coq with 'real' dependent types). And this isn't just limited to integers. If you go look at the quicksort example, you'll see something that when translated to GHC looks like:

  -- datakind natlist = nil | cons nat natlist
  data Nil
  data n ::: l

  data ListFam :: * -> * where
    Nil   :: ListFam Nil
    (:::) :: forall n l. NatFam n -> ListFam l -> ListFam (n ::: l)

  data List where
    L :: forall l. ListFam l -> List

So this sort of type-level vs. value-level duplication with GADTs tying the two together seems to be what is always done in ATS. This may not be as sexy as taking the plunge all the way into dependent typing à la Agda and Coq, but it might be a practical point in the design space that GHC/Haskell-of-the-future could move toward without too much shaking up. And if ATS is any indication, it allows for very efficient code, to boot. :) Cheers, -- Dan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
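Dan's natural-number encoding above can be made compilable; the following is a self-contained sketch (GHC with the GADTs and EmptyDataDecls extensions assumed; `natVal` and `natToInt` are illustrative helpers not in the original post, showing how inspecting the singleton recovers the runtime value):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level naturals (the "datakind"): the types Z and S n have no values.
data Z
data S n

-- Singleton family linking static and dynamic: a value of type NatFam n
-- is exactly the runtime natural n. Constructors may reuse the names Z
-- and S since types and values live in separate namespaces.
data NatFam n where
  Z :: NatFam Z
  S :: NatFam n -> NatFam (S n)

-- Existential wrapper for arbitrary runtime naturals.
data Nat where
  N :: NatFam n -> Nat

-- Pattern matching on the singleton refines the type index as it
-- computes the ordinary value.
natVal :: NatFam n -> Int
natVal Z     = 0
natVal (S m) = 1 + natVal m

-- Unwrapping the existential gives back an arbitrary natural.
natToInt :: Nat -> Int
natToInt (N m) = natVal m

main :: IO ()
main = do
  print (natVal (S (S (S Z))))  -- 3
  print (natToInt (N (S Z)))    -- 1
```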
Re: [Haskell-cafe] How to catch error in array index when debugging
"Claus" == Claus Reinke claus.rei...@talk21.com writes:

Claus> None of which is satisfactory. You might also want to add
Claus> yourself to this ticket:
Claus> "index out of range" error message regression
Claus> http://hackage.haskell.org/trac/ghc/ticket/2669

How do I do that? -- Colin Adams Preston Lancashire ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How to catch error in array index when debugging
"Adrian" == Adrian Neumann aneum...@inf.fu-berlin.de writes:

Adrian> You can use the ghci debugger
Adrian> http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci-debugger.html
Adrian> it can set breakpoints on exceptions.

So I tried adding the -fbreak-on-error flag. It made no difference - it still exited:

  <interactive>: Error in array index
  <interactive>: interrupted
  <interactive>: warning: too many hs_exit()s

Adrian> On 14 Mar 2009, at 09:39, Colin Paul Adams wrote:
Adrian> I'm getting a runtime failure "Error in array index". This causes ghci to exit. Is there a way to get it to break instead, so I can find out which function is failing?
Adrian> -- Colin Adams Preston Lancashire

-- Colin Adams Preston Lancashire ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Uploading files to the wiki
I can't manage to upload files to the Haskell wiki. I've tried different browsers, different internet connections, different machines, different operating systems, and different user accounts - all without success. Is this a new anti-spam measure? This is slightly annoying. I was looking to release the next Monad.Reader on the wiki. Thanks for any advice, Wouter ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
On Saturday, 14 March 2009 at 14:51, Conor McBride wrote: Conor, is Epigram currently under development? We've even stopped working on the engine and started working on the chassis. I'm in an intensive teaching block until the end of April, but from May it becomes a priority. The Reusability and Dependent Types project studentship will hopefully bring an extra pair of hands, come October. This sounds good! I don't see any conflict -- indeed I see considerable synergy -- in working simultaneously on the experimental frontier of dependent type systems and on the pragmatic delivery of their basic benefits via a much more established language like Haskell. Indeed, I think we'll learn more readily about the engineering realities of developing applications with dependent types by doing plenty of the latter. This makes sense indeed. Best wishes, Wolfgang ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Problem with cabal-install where package requires* mutually exclusive versions of another package
On Sat, 2009-03-14 at 19:10 +1100, Mark Wassell wrote: I am trying to install a package using cabal-install, however the package requires an older version of QuickCheck and one of the required packages requires the latest version:

  $ cabal fetch Reactive
  Resolving dependencies...
  cabal.exe: cannot configure Stream-0.3.1. It requires QuickCheck >=2.0
  For the dependency on QuickCheck >=2.0 there are these packages:
  QuickCheck-2.1 and QuickCheck-2.1.0.1. However none of them are available.
  QuickCheck-2.1 was excluded because checkers-0.1.3 requires QuickCheck <2.0
  QuickCheck-2.1 was excluded because reactive-0.10.5 requires QuickCheck <2.0
  QuickCheck-2.1.0.1 was excluded because checkers-0.1.3 requires QuickCheck <2.0
  QuickCheck-2.1.0.1 was excluded because reactive-0.10.5 requires QuickCheck <2.0

The cabal-install solver does not have enough information to know that it will not break to use multiple versions of QuickCheck, so it's looking for a solution involving only one version. It cannot find one, of course. A medium term solution here should involve letting packages specify that some of their dependencies are private, i.e. nothing is re-exported and thus there is no danger of clashes. How can I get around this? I could work around this and install the packages individually. Yes. You'll just have to avoid using "cabal install" because that invokes the solver. You can either go back to the "runghc Setup" method or use the (undocumented) "cabal install --only" flag. Can I have two versions of a package installed (i.e. QuickCheck) and will everything get resolved correctly? Should do. It will warn you that it's highly likely to break, but if you're confident that it will not then press on. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
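A sketch of the "runghc Setup" route Duncan mentions, for each package in turn (exact steps may differ by Cabal version; package names here are from the thread):

```
# inside an unpacked source tree, e.g. Stream-0.3.1/, bypass the solver:
runghc Setup configure --user
runghc Setup build
runghc Setup install

# or, per Duncan, the undocumented flag that installs just this package:
cabal install --only
```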
Re: Re[2]: [Haskell-cafe] bytestring vs. uvector
On Tue, 2009-03-10 at 23:55 +0300, Bulat Ziganshin wrote: starting with 6.6, ForeignArray access is a no-op, so we can just use obvious Ptr operations (via the Storable class) to get fast unboxed array access. so, no more need for those special ByteArray# access operations. but the Array library is still old, so any effort based on its sources gets the same restrictions. also, ByteArray# may be unpinned, but afaik this isn't really important - it can be coerced to a Ptr for the duration of one operation. Actually the fact that they may be unpinned is really important for small allocations like short strings. Pinned allocations are slow and lead to heap fragmentation. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re[2]: [Haskell-cafe] bytestring vs. uvector
On Fri, 2009-03-13 at 20:30 +0300, Bulat Ziganshin wrote: Hello Don, Friday, March 13, 2009, 8:08:57 PM, you wrote: What is the reason why you have decided to use unpinned arrays (ByteArray#) instead of pinned arrays (Foreign.Ptr)? They prevent heap fragmentation (and in general are faster). you probably mean faster alloc/gc operations, everything else should be the same Right. Access times are the same. Both are just pointers internally. It is just the allocation time, GC time, extra memory use, and the lower cache utilisation caused by heap fragmentation. For big arrays it doesn't make much difference. Big ByteArray# allocations get pinned anyway. For small ones, like strings, I expect the difference is much more noticeable, though I have not measured it. Using ByteArray# also means we can use ST/runST rather than IO/unsafePerformIO. In general we should prefer heap allocated byte arrays and ST unless we really really want to always use pinned allocations to interact with C libs easily. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
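Duncan's last point, about ByteArray# letting you work in ST rather than IO, can be illustrated with the array package's ST interface (a sketch; STUArray is backed by a heap-allocated, unpinned MutableByteArray#, and `squares` is an illustrative name):

```haskell
import Data.Array.ST (runSTUArray, newArray, writeArray)
import Data.Array.Unboxed (UArray, elems)

-- Build an unboxed array purely: allocation and mutation happen inside
-- ST, so no unsafePerformIO is needed and the result is an ordinary
-- immutable value.
squares :: Int -> UArray Int Int
squares n = runSTUArray $ do
  arr <- newArray (0, n) 0
  mapM_ (\i -> writeArray arr i (i * i)) [0 .. n]
  return arr

main :: IO ()
main = print (elems (squares 5))  -- [0,1,4,9,16,25]
```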
Re: [Haskell-cafe] bytestring vs. uvector
On Mon, 2009-03-09 at 18:29 -0700, Alexander Dunlap wrote: Thanks for all of the responses! So let me see if my summary is accurate here: - ByteString is for just that: strings of bytes, generally read off of a disk. The Char8 version just interprets the Word8s as Chars but doesn't do anything special with that. Right. So it's only suitable for binary or ASCII (or mixed) formats. - Data.Text/text library is a higher-level library that deals with text, abstracting over Unicode details and treating each element as a potentially-multibyte character. If you're writing about this on the wiki for people, it's best not to confuse the issue by talking about multibyte anything. We do not describe [Char] as a multibyte encoding of Unicode. We say it is a Unicode string. The abstraction is at the level of Unicode code points. The String type *is* a sequence of Unicode code points. This is exactly the same for Data.Text. It is a sequence of Unicode code points. It is not an encoding. It is not UTF-anything. It does not abstract over Unicode. The Text type is just like the String type but with different strictness and performance characteristics. Both are just sequences of Unicode code points. There is a reasonably close correspondence between Unicode code points and what people normally think of as characters. - utf8-string is a wrapper over ByteString that interprets the bytes in the bytestring as potentially-multibyte unicode characters. This on the other hand is an encoding. ByteString is a sequence of bytes and when we interpret that as UTF-8 then we are looking at an encoding of a sequence of Unicode code points. Clear as mud? :-) Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
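Duncan's distinction between a sequence of code points and an encoding can be seen with nothing but base (a sketch; `utf8Bytes` is a naive byte-length calculator written for this illustration, not a library function):

```haskell
import Data.Char (ord)

-- How many bytes a code point occupies in UTF-8 (enough for this demo).
utf8Bytes :: Char -> Int
utf8Bytes c
  | n < 0x80    = 1
  | n < 0x800   = 2
  | n < 0x10000 = 3
  | otherwise   = 4
  where n = ord c

main :: IO ()
main = do
  let s = "caf\233"              -- "café": four Unicode code points
  print (length s)               -- 4: String is a sequence of code points
  print (map ord s)              -- [99,97,102,233]
  print (sum (map utf8Bytes s))  -- 5: the UTF-8 *encoding* is five bytes
```

The same holds for Data.Text: its `length` counts code points, while the byte count only appears once you pick an encoding.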
Re: [Haskell-cafe] Problem with cabal-install where package requires* mutually exclusive versions of another package
On Sat, Mar 14, 2009 at 09:40, Duncan Coutts duncan.cou...@worc.ox.ac.uk wrote: A medium term solution here should involve letting packages specify that some of their dependencies are private, ie nothing is re-exported and thus there is no danger of clashes. Is the long term solution to automatically detect what is exported? It seems feasible, since I think that is a totally static property, no? Denis ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Problem with cabal-install where package requires* mutually exclusive versions of another package
On Sat, 2009-03-14 at 10:02 -0600, Denis Bueno wrote: On Sat, Mar 14, 2009 at 09:40, Duncan Coutts duncan.cou...@worc.ox.ac.uk wrote: A medium term solution here should involve letting packages specify that some of their dependencies are private, ie nothing is re-exported and thus there is no danger of clashes. Is the long term solution to automatically detect what is exported? It seems feasible, since I think that is a totally static property, no? Right I think the long term solution is something that involves identifying packages as functors/signatures and doing subtyping and unification to check if packages can be composed. Duncan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] broken IO support in uvector package, when using non primitive types
Daniel Peebles wrote: I have added UAProd-based Binary instances to my copy of the uvector repo at http://patch-tag.com/publicrepos/pumpkin-uvector . I can confirm that it works for me. However I now have a memory problem with data decoding. I need to serialize the Netflix Prize training dataset. When I parse the data from the original data set, memory usage is about 640 MB [1]. But when I load the data serialized and compressed (as [UArr (Word32 :*: Word8)]), memory usage is about 840 MB... The culprit is probably the decoding of the list (17770 elements). [1] I have written a script in Python that parses the data, and it only takes 491 MB (using a list of tuples with two compact arrays from numpy). So, GHC has memory problems here. Thanks Manlio Perillo ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Alternative to Data.Binary
grzegorz.chrupala: Hi all, Is there a serialization library other than Data.Binary on Hackage? I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. It's very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. I don't care that much about it being extremely fast, I just want to stop worrying that if I try to read a file a few percent larger than the last time, my program will suddenly stop working. Have you tried the latest release, which modified the Map and [a] instances? -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
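For reference, the basic Data.Binary usage under discussion (a sketch; the instance tuning between releases changes the space behaviour, not this API):

```haskell
import Data.Binary (encode, decode)
import qualified Data.ByteString.Lazy as L

main :: IO ()
main = do
  let xs = [1 .. 10] :: [Int]
      bs = encode xs      -- serialise to a lazy ByteString
  print (L.length bs)     -- encoded size in bytes (varies by binary version)
  print (decode bs == xs) -- round-trips: True
```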
Re: [Haskell-cafe] Re: Haskell-Wiki Account registration
On 14 Mar 2009, at 10:20, Ashley Yakeley wrote: We will have a more up-to-date distribution when a new machine takes over from the existing machine at Yale. I don't know when anyone will have a new machine. The contract with Yale for running the haskell.org machine is due for its yearly renewal in the early summer (around July I think). Various people have already mooted the possibility of re-locating all of the services provided on that machine, by purchasing hosting elsewhere. So we should be aiming to plan which version of which software to install on the new machine, and an orderly transfer of content in that sort of timeframe. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
Hi Dan On 14 Mar 2009, at 14:26, Dan Doel wrote: On Saturday 14 March 2009 8:12:09 am Conor McBride wrote: Rome wasn't burnt in a day. Of course I want more than just numerical indexing (and I even have a plan) but numeric constraints are so useful and have so much of their own peculiar structure that they're worth studying in their own right, even for languages which do have full dependent types, let alone Haskell. We'll carry out this project with an eye to the future, to make sure it's compatible with full dependent types. One should note that ATS, which has recently been generating some excitement, doesn't have full dependent types, depending on what exactly you mean by that. I'm really impressed by the results ATS is getting, and by DML before it. I think these systems do a great job of showing what can be gained in performance by improving precision. For instance, its dependent typing for integers consists of: I certainly want 1) A simply typed static/type-level integer type which looks exactly like the value-level integers and has a helpful but not exhaustive selection of the same operations. But this... 2) A family of singleton types int(n) parameterized by the static type. For instance, int(5) is the type that contains only the run-time value 5. 3) An existential around the above family for representing arbitrary integers. ...I like less. where, presumably, inspecting a value of the singleton family informs the type system in some way. But, we can already do this in GHC (I'll do naturals):

  -- datakind nat = Z | S nat
  data Z
  data S n

  -- family linking static and dynamic
  data NatFam :: * -> * where
    Z :: NatFam Z
    S :: NatFam n -> NatFam (S n)

  -- existential wrapper
  data Nat where
    N :: forall n. NatFam n -> Nat

Of course, ATS's are built-in, and faster, and the type-level programming is probably easier, but as far as dependent typing goes, GHC is already close (I don't think you'd ever see the above scheme in something like Agda or Coq with 'real' dependent types). Which is why I'd rather not have to do that in Haskell either. It really hurts to go through this rigmarole to make this weird version of Nat. I'd much rather figure out how to re-use the existing datatype Nat. Also, where the GADT coding really doesn't compete with ATS is in dealing with constraints on indices that go beyond unification -- numbers have more structure and deserve special attention. Numerical indexing systems need to carry a big stick for boring algebra if we're to gain the power but keep the weight down. But wherever possible, I'd prefer to avoid doing things an awkward way, just because we don't quite have dependent types. If the above kludge is really necessary, then it should at least be machine-generated behind the scenes from ordinary Nat. I'd much rather be able to lift a type to a kind than have a separate datakind feature. Apart from anything else, when you get around to indexed kinds, what you gonna index /them/ with? Long term, I'd far rather think about how to have some sort of dependent functions and tuples than muddle along with singletons and weak existentials. So this sort of type-level vs. value-level duplication with GADTs tying the two together seems to be what is always done in ATS. This may not be as sexy as taking the plunge all the way into dependent typing à la Agda and Coq, but it might be a practical point in the design space that GHC/Haskell-of-the-future could move toward without too much shaking up. And if ATS is any indication, it allows for very efficient code, to boot. :) I'd certainly agree that ATS demonstrates the benefits of moving in this direction, but I think we can go further than you suggest, avoiding dead ends in the design space, and still without too much shaking up. I don't think the duplicate-or-plunge dilemma you pose exhausts the options. At least, I see no reason to presume so and I look forward to finding out! To be honest, I think the real challenge is to develop good libraries and methodologies for working with indexed types (and particularly, indexed notions of computation: what's the indexed mtl?). There are lots of promising ideas around, but it's hard to build something that scales whilst the encodings are so clunky. A bit of language design, even just to package existing functionality more cleanly, would really kick the door open. And yes, I am writing a research proposal. All the best Conor ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Alternative to Data.Binary
Don Stewart-2 wrote: Have you tried the latest release, which modified the Map and [a] instances? No, I'm working with 0.5. I'll give the new version a try. Thanks! -- Grzegorz -- View this message in context: http://www.nabble.com/Alternative-to-Data.Binary-tp22512229p22514771.html Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] broken IO support in uvector package, when using non primitive types
The main issue seems to be that although the semantics of UIO may be arbitrary, Wallace's patch actually broke deserialization for any production-based UArr, and I'm not sure the benefits are worthwhile (loading a file someone else sent you) given that endianness is already not taken into account when loading (so the chances of someone giving you a raw binary file that happens to contain values of the correct endianness is rather low, it seems). In my experience, having written several libraries in Haskell for serialisation and deserialisation, it is highly problematic when a library writer decides that all data to be stored began its life in Haskell, and is only being serialised in order to be read back in again by the same Haskell library. I have already made that mistake myself in two different libraries now, eventually regretting it (and fixing it). The real utility of serialisation is when it is possible to read data from any arbitrary external source, and to write data according to external standards. A library that can only read and write data in its own idiosyncratic format is not production-ready at all. This is why I submitted the patch that enables the uvector library to read raw binary data that was not produced by itself. I had 300Gb of data from an external source that I needed to deal with efficiently, and uvector was the ideal candidate apart from this small design flaw. And yes, my code also had to deal with endianness conversion on this data. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
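Malcolm's point about reading externally produced data with explicit endianness is exactly what Data.Binary.Get and Data.Binary.Put handle (a sketch assuming the binary package; here we write and re-read little-endian Word32s, so the result is the same regardless of the host's native byte order):

```haskell
import Control.Monad (replicateM)
import Data.Binary.Get (runGet, getWord32le)
import Data.Binary.Put (runPut, putWord32le)
import Data.Word (Word32)

main :: IO ()
main = do
  -- Pretend these bytes came from an external little-endian file format.
  let bytes = runPut (mapM_ putWord32le ([1, 2, 3] :: [Word32]))
      vals  = runGet (replicateM 3 getWord32le) bytes
  print vals  -- [1,2,3] on any host endianness
```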
Re: [Haskell-cafe] Using a monad to decompose a function into functions
On Friday 13 March 2009, Cristiano Paris wrote: 2009/3/13 Marcin Kosiba marcin.kos...@gmail.com: Hi, I've already checked those out. I tried using your yield implementation and while it works, I couldn't get it to work with the state monad. So while:

  data RecPair a b = Nil | RP (b, a -> RecPair a b)
  yield x = Cont $ \k -> RP (x, k)

got me half-way to my goal, I couldn't figure out how to make something like:

  yield' = do
    state <- get
    state' <- yield state
    put state'

Basically, the yield is built upon the Cont monad, which has a transformer counterpart, ContT. You could try and re-implement the yield under ContT instead of just Cont; then you can stack ContT on top of State (or StateT if you need more monads) and have a state (i.e. get/put) and the yield. Great! That helped a lot. I'm attaching a ContT yield implementation and another one which also handles a return statement with a different type. Hope someone finds them useful. Thanks! Marcin Kosiba

  {-# OPTIONS -XNoMonomorphismRestriction #-}
  module Main where

  import Control.Monad
  import Control.Monad.Cont
  import Control.Monad.State
  import Control.Monad.Identity

  data (Monad m) => RecPair m a b = Nil | RP (b, a -> m (RecPair m a b))

  yield :: (Monad m) => r -> ContT (RecPair m a r) m a
  yield x = ContT $ \k -> return $ RP (x, k)

  f'cps = return 2

  test = do
    x <- f'cps
    yield x
    yield (x + 1)
    return ()

  testSt :: (MonadState s m, Num s) => ContT (RecPair m a s) m ()
  testSt = do
    y <- f'cps
    v <- get
    put (y + 1)
    yield v
    v' <- get
    yield v'
    return ()

  getRP :: RecPair Identity a a -> [a]
  getRP Nil = []
  getRP (RP (b, f)) = b : (getRP $ runIdentity $ f b)

  runYield :: ContT (RecPair m a1 b) Identity a -> RecPair m a1 b
  runYield f = runIdentity $ runContT f (\_ -> return Nil)

  -- result is [2,3]
  runTest = getRP $ runYield test

  getRPSt :: (RecPair (State t) a a, t) -> [a]
  getRPSt (Nil, _) = []
  getRPSt (RP (b, f), s) = b : (getRPSt $ runState (f b) s)

  runYieldSt :: (Num s) => s -> (RecPair (State s) a s, s)
  runYieldSt iState = runState (runContT testSt (\_ -> return Nil)) iState

  -- result is [iState, 3]
  runTestSt iState = getRPSt $ runYieldSt iState

  {-# OPTIONS -XNoMonomorphismRestriction #-}
  module Main where

  import Control.Monad
  import Control.Monad.Cont
  import Control.Monad.State
  import Control.Monad.Identity

  data (Monad m) => RecPair m a b r = Nil r | RP (b, a -> m (RecPair m a b r))

  yield :: (Monad m) => r -> ContT (RecPair m a r v) m a
  yield x = ContT $ \k -> return $ RP (x, k)

  f'cps = return 2

  test = do
    x <- f'cps
    yield x
    yield (x + 1)
    return (-1)

  testSt = do
    y <- f'cps
    v <- get
    put (y + v)
    yield v
    testSt

  getRP :: RecPair Identity a a a -> [a]
  getRP (Nil x) = [x]
  getRP (RP (b, f)) = b : (getRP $ runIdentity $ f b)

  runYield :: ContT (RecPair m a1 b a) Identity a -> RecPair m a1 b a
  runYield f = runIdentity $ runContT f (\x -> return $ Nil x)

  -- result is [2,3,-1]
  runTest = getRP $ runYield test

  getRPSt :: (RecPair (State t) v v v, t) -> [v]
  getRPSt (Nil x, _) = [x]
  getRPSt (RP (b, f), s) = b : (getRPSt $ runState (f b) s)

  runYieldSt :: ContT (RecPair m a1 b a) (State s) a -> s -> (RecPair m a1 b a, s)
  runYieldSt f iState = runState (runContT f (\x -> return $ Nil x)) iState

  -- result is [iState, iState + 2 ..]
  runTestSt iState = getRPSt $ runYieldSt testSt iState

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
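A stripped-down, runnable version of the same idea, using transformers' ContT directly (a sketch; the `Stream` type and `collect`/`gen` names are illustrative, playing the role of RecPair above):

```haskell
import Control.Monad.Trans.Cont (ContT (..))
import Data.Functor.Identity (Identity, runIdentity)

-- A suspended generator: either finished, or one yielded value plus
-- the resumption of the rest of the computation.
data Stream m a = Done | Yield a (m (Stream m a))

-- Capture the current continuation and package it with the value.
yield :: Monad m => a -> ContT (Stream m a) m ()
yield x = ContT $ \k -> return (Yield x (k ()))

gen :: ContT (Stream Identity Int) Identity ()
gen = do
  yield 1
  yield 2
  yield 3

-- Drive the generator to completion, collecting yielded values.
collect :: Stream Identity a -> [a]
collect Done        = []
collect (Yield x k) = x : collect (runIdentity k)

main :: IO ()
main = print (collect (runIdentity (runContT gen (\_ -> return Done))))
-- [1,2,3]
```

Because ContT is a transformer, the Identity base monad can be swapped for State (or StateT over something else) to get Marcin's get/put-plus-yield combination.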
Re: [Haskell-cafe] broken IO support in uvector package, when using non primitive types
I'm sorry, I didn't mean to imply otherwise. I can see your point, but maybe it would be even more flexible in that kind of situation to keep a separate UIO-like API that allows one to explicitly request a particular size? For your large dataset, you could specify the entire filesize (divided by the size of your elements, maybe) while Manlio could take care of storing his array sizes in a different form if necessary. The semantics of UIO loading could then become I have a large chunk of data on a handle that I need to load verbatim; tell me how much to load, which would work on pipes, sockets, and other non-file sources, as well as still being useful in your case of having enormous amounts of data. Anyway, I'm clearly in no position to be deciding on significant API changes for uvector, but having more than one option in a high-performance library like this seems like a good thing. Cheers, Dan On Sat, Mar 14, 2009 at 1:20 PM, Malcolm Wallace malcolm.wall...@cs.york.ac.uk wrote: The main issue seems to be that although the semantics of UIO may be arbitrary, Wallace's patch actually broke deserialization for any production-based UArr, and I'm not sure the benefits are worthwhile (loading a file someone else sent you) given that endianness is already not taken into account when loading (so the chances of someone giving you a raw binary file that happens to contain values of the correct endianness is rather low, it seems). In my experience, having written several libraries in Haskell for serialisation and deserialisation, it is highly problematic when a library writer decides that all data to be stored began its life in Haskell, and is only being serialised in order to be read back in again by the same Haskell library. I have already made that mistake myself in two different libraries now, eventually regretting it (and fixing it). 
The real utility of serialisation is when it is possible to read data from any arbitrary external source, and to write data according to external standards. A library that can only read and write data in its own idiosyncratic format is not production-ready at all. This is why I submitted the patch that enables the uvector library to read raw binary data that was not produced by itself. I had 300Gb of data from an external source that I needed to deal with efficiently, and uvector was the ideal candidate apart from this small design flaw. And yes, my code also had to deal with endianness conversion on this data. Regards, Malcolm ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Hashing over equivalence classes
For the second case you might be able to come up with a commutative hash-combiner function for (&&) and (||). For the lambda-term situation, I can think of a couple ways to hash that give what you want. (1) Ignore variable names altogether while hashing; this gives you what you want but has the disadvantage that (\a b. a) and (\a b. b) hash to the same value. (2) Hash the term with de Bruijn indices. But this is the same as "hash the canonical element". I don't see that you have much other choice, though. Fortunately, due to laziness, hash . canonicalize should not have much worse space behavior than just hash. Did you have something else in mind? -- ryan On Sat, Mar 14, 2009 at 3:51 AM, Roman Cheplyaka r...@ro-che.info wrote: Are there some known ways to define hashing (or any other) functions over equivalence classes? I.e. a ~ b => hash(a) == hash(b), where (~) is some equivalence relation. For example, you might want to hash lambda terms modulo alpha-equivalence or hash logical terms with respect to commutativity of (&&) and (||). Often we can choose a 'canonical' element from each class and hash it. But (at least, in theory) it's not necessary. So, are there (practical) ways to define a hash function without it? -- Roman I. Cheplyaka :: http://ro-che.info/ Don't let school get in the way of your education. - Mark Twain ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
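Ryan's option (2) can be sketched concretely: convert to de Bruijn indices and hash the result, so alpha-equivalent terms agree while distinct terms (usually) differ. This is a toy illustration (the `Term`/`DB` types and the ad-hoc `hashDB` combiner are invented for the example, not from any library); note how `hashDB . toDB []` is exactly "hash . canonicalize", fused by laziness:

```haskell
import Data.List (elemIndex)
import Data.Maybe (fromMaybe)

-- Named lambda terms, and their de Bruijn (nameless) counterparts.
data Term = Var String | Lam String Term | App Term Term
data DB   = DVar Int | DLam DB | DApp DB DB deriving (Eq, Show)

-- Canonicalize: replace each variable by its binder distance.
toDB :: [String] -> Term -> DB
toDB env (Var x)   = DVar (fromMaybe (error ("free variable: " ++ x))
                                     (elemIndex x env))
toDB env (Lam x b) = DLam (toDB (x : env) b)
toDB env (App f a) = DApp (toDB env f) (toDB env a)

-- A toy structural hash over the canonical form.
hashDB :: DB -> Int
hashDB (DVar i)   = 3 + 31 * i
hashDB (DLam b)   = 5 + 31 * hashDB b
hashDB (DApp f a) = 7 + 31 * (hashDB f + 17 * hashDB a)

hashTerm :: Term -> Int
hashTerm = hashDB . toDB []

main :: IO ()
main = do
  let k1 = Lam "a" (Lam "b" (Var "a"))  -- \a b. a
      k2 = Lam "x" (Lam "y" (Var "x"))  -- alpha-equivalent to k1
      s  = Lam "x" (Lam "y" (Var "y"))  -- \x y. y, a different term
  print (hashTerm k1 == hashTerm k2)  -- True
  print (hashTerm k1 == hashTerm s)   -- False
```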
Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
On Saturday 14 March 2009 1:07:01 pm Conor McBride wrote: But this... 2) A family of singleton types int(n) parameterized by the static type. For instance, int(5) is the type that contains only the run-time value 5. 3) An existential around the above family for representing arbitrary integers. ...I like less. Well, I don't actually like it, either. Finding this out rather disappointed me. I'd certainly agree that ATS demonstrates the benefits of moving in this direction, but I think we can go further than you suggest, avoiding dead ends in the design space, and still without too much shaking up. I don't think the duplicate-or-plunge dilemma you pose exhausts the options. At least, I seem no reason to presume so and I look forward to finding out! I didn't mean to suggest that ATS is the best you can do. Merely that you can still get a lot done without going very far beyond what is technically possible (though not enjoyable) in GHC today (to the point that people will actually categorize your language as dependently typed when it merely has a type language comparable in richness and convenience to the value level, but is still mostly separate). So adding more faux dependent typing (like ATS or Omega) isn't just wasting time when we should be figuring out how to compile Agda efficiently, because simply making type-level programming easy, while maintaining a strict divide between values and types may well be good enough in practice, until the Agdas and Epigrams come into their own. If you can do better than ATS, and have value-level stuff automatically reproduced at the type level and suchlike, so much the better. I fully support that. :) -- Dan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Alternative to Data.Binary
Don Stewart ha scritto: grzegorz.chrupala: Hi all, Is there a serialization library other than the Data.Binary from hackage? I am using Data.Binary in a couple of projects, but I have found its stack and memory usage very hard to control. Its very common that decoding a map or list of non-trivial size uses up all available RAM, or causes a stack overflow. [...] Have you tried the latest release, which modified the Map and [a] instances? Tried right now. My [UArr (Word32 :*: Word8)], where the list length is 17770, now requires 660 MB of memory, when decoding, against 840 MB with the previous version of the binary package. This is fantastic, thanks! Regards Manlio ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Alternative to Data.Binary
Grzegorz Chrupala grzegorz.chrup...@computing.dcu.ie wrote: Hi all, Is there a serialization library other than the Data.Binary from hackage? There are Iteratees[1]. They're still grok-in-process, community-wise. They are (afaik) currently only used for file input, though it's certainly possible to enumerate a data structure and have an iteratee do the output. Hmmm. Output-iteratees are a very, very interesting thing to think about. [3] Anyway, they'd almost certainly annihilate all your worries about resource usage. [1] The original: http://okmij.org/ftp/Haskell/Iteratee/ Cabalised and improved (e.g. seek performance, identifier capitalisation): http://inmachina.net/~jwlato/haskell/iteratee/ Monadic Regions[2] might shed some light on Oleg's train of thought regarding all this. [2] http://www.cs.rutgers.edu/~ccshan/capability/region-io.pdf [3] Which is just another way of saying I'd rather shut up now, let me try it first -- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How to catch error in array index when debugging
Claus> None of which is satisfactory. You might also want to add
Claus> yourself to this ticket:
Claus> "index out of range" error message regression
Claus> http://hackage.haskell.org/trac/ghc/ticket/2669

How do I do that?

GHC Trac's idea of voting is by adding yourself to the cc, so that tickets can be sorted by length of cc list: http://hackage.haskell.org/trac/ghc/report/17 That is often subverted by closing tickets as duplicate/related, without transferring the cc list to the one ticket that is kept ;-) Apart from the immediate bug of not getting any information, there's also the more general issue of wanting information about the call site (who called which operation, leading to the exception). A solution to that issue has been sought for a long time, but there seem to be so many options that the discussion has slowed down to a halt:

  Lexical call site string
  http://hackage.haskell.org/trac/ghc/ticket/960
  Maintaining an explicit call stack
  http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack

Using your own wrappers to give you the missing information is probably the best short-term workaround, but it is no fun. Something like this, perhaps:

  import qualified Data.Array.IArray as A
  import Control.Exception

  arr ! index = mapException (addErrorInfo (" ! " ++ show index)) $ arr A.! index
  arr // idxs = mapException (addErrorInfo (" // " ++ show idxs)) $ arr A.// idxs

  addErrorInfo info (ErrorCall str) = ErrorCall (str ++ ":" ++ info)

  test1 i = (A.array (1,5) [(i,i) | i <- [1..5]] :: A.Array Int Int) ! i
  test2 i = (A.array (1,5) [(i,i) | i <- [1..5]] :: A.Array Int Int) // [(i,0)]

  *Main> test1 0
  *** Exception: Error in array index: ! 0
  *Main> test1 3
  3
  *Main> test1 8
  *** Exception: Error in array index: ! 8
  *Main> test2 0
  array *** Exception: Error in array index: // [(0,0)]
  *Main> test2 4
  array (1,5) [(1,1),(2,2),(3,3),(4,0),(5,5)]
  *Main> test2 7
  array *** Exception: Error in array index: // [(7,0)]

Claus ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Proof of induction case of simple foldl identity.
Can someone provide the induction-case proof of the following identity:

foldl (-) ((-) x y) ys = (foldl (-) x ys) - y

If foldl is defined as usual:

foldl :: (b -> a -> b) -> b -> [a] -> b
foldl f e [] = e
foldl f e (x : xs) = foldl f (f e x) xs

The first two cases, _|_ and [], are trivial. Here's my best attempt at the (y : ys) case (the left and right sides reduce to expressions that are obviously equal, but I don't know how to show it):

Case (y : ys).

Left-side reduction:

foldl (-) ((-) x y) (y : ys)
  = {second equation of foldl}
foldl (-) ((-) ((-) x y) y) ys
  = {notation}
foldl (-) ((-) (x - y) y) ys
  = {notation}
foldl (-) ((x - y) - y) ys
  = {arithmetic}
foldl (-) (x - 2 * y) ys

Right-side reduction:

(foldl (-) x (y : ys)) - y
  = {second equation of foldl}
(foldl (-) ((-) x y) ys) - y
  = {induction hypothesis: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y}
((foldl (-) x ys) - y) - y
  = {arithmetic}
(foldl (-) x ys) - 2 * y

Thanks as always. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
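Before hunting for the proof, the identity can at least be sanity-checked on small inputs. A hand-rolled sketch (plain list comprehensions rather than QuickCheck; `prop` and `checkAll` are names introduced here for illustration):

```haskell
-- The claimed identity, stated as a testable property over Integer.
prop :: Integer -> Integer -> [Integer] -> Bool
prop x y ys = foldl (-) ((-) x y) ys == foldl (-) x ys - y

-- Exhaustively check a small grid of inputs.
checkAll :: Bool
checkAll = and [ prop x y ys
               | x  <- [-2..2]
               , y  <- [-2..2]
               , ys <- [[], [1], [3,5], [-1,4,2]] ]
```

For Integer the check passes, which is consistent with the key arithmetic fact used later in the thread: (u - v) - w equals (u - w) - v.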
[Haskell-cafe] building c-callable shared objects and dlls with Haskell
Hello, What's the current best practice to build shared objects / dlls to call Haskell from C? I think I figured out the basics, but I would like to understand the picture completely and document the current state of affairs in the Haskell wiki. In particular, I'd like to make sure that the shared libraries are usable from multiple applications, and ideally that applications can dynamically link to multiple shared libraries built from Haskell. I was trying to do that to call Pandoc from C, and the information was rather scattered. Here are two links that I used:

http://www.haskell.org/haskellwiki/Calling_Haskell_from_C
http://www.haskell.org/ghc/docs/latest/html/users_guide/win32-dlls.html

Based on that information, I was able to compile and run small examples. However, I'm not sure I completely understand how it works in production. Specifically, for the UNIX version, I followed a recipe for loading and unloading the Haskell runtime with library constructors/destructors:

#include "HsFFI.h"

static void library_init(void) __attribute__((constructor));
static void library_init(void)
{
    hs_init(..);
}

static void library_exit(void) __attribute__((destructor));
static void library_exit(void)
{
    hs_exit();
}

Does this work correctly? Another concern is that every library compiled this way will include its own copy of the Haskell runtime. Will there be problems when using multiple libraries? For the Windows counterpart, there is this very suspicious code I adapted from the GHC user guide:

#include <windows.h>
#include "Rts.h"

extern void __stginit_LibPandoc(void);

BOOL STDCALL DllMain( HANDLE hModule, DWORD reason, void* reserved )
{
    if (reason == DLL_PROCESS_ATTACH) {
        static char* args[] = { "ghcDll", NULL };
        startupHaskell(1, args, __stginit_LibPandoc);
    }
    return TRUE;
}

It does not seem to unload the Haskell runtime, indeed the Guide warns that it is unsafe to do so from DllMain. What are the implications of this? Thanks! 
--A ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
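For completeness, the Haskell side that such a C stub ends up calling is just a module with `foreign export` declarations. A minimal sketch (the `square` name and signature are made up for illustration; a real library like Pandoc would export its own entry points):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CInt)

-- Ordinary Haskell logic...
square :: CInt -> CInt
square x = x * x

-- ...exposed to C; GHC generates a stub header declaring it, and the
-- function is callable from C once hs_init()/startupHaskell has run.
foreign export ccall square :: CInt -> CInt
```

Compiling this with `-shared` alongside the constructor/destructor stub above is the usual recipe; the exported symbol is what the C application links against.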
the nearly dependent near future, was Re: [Haskell-cafe] Microsoft PhD Scholarship at Strathclyde
Hi Dan

On 14 Mar 2009, at 18:48, Dan Doel wrote: On Saturday 14 March 2009 1:07:01 pm Conor McBride wrote: I don't think the duplicate-or-plunge dilemma you pose exhausts the options. At least, I see no reason to presume so and I look forward to finding out! I didn't mean to suggest that ATS is the best you can do. Merely that you can still get a lot done without going very far beyond what is technically possible (though not enjoyable) in GHC today (to the point that people will actually categorize your language as dependently typed when it merely has a type language comparable in richness and convenience to the value level, but is still mostly separate).

I'd certainly agree with that assessment. [..] If you can do better than ATS, and have value-level stuff automatically reproduced at the type level and suchlike, so much the better. I fully support that. :)

Good! 'Cos that's what I'm going for. It certainly won't involve types depending on arbitrary expressions -- or even on run-time data in the first instance -- but by taking the approach of lifting what we can from types to kinds and from values to types, we keep those options open, as well as hiding the cruft.

Note: subject about to slide into something a tad more technical, but I gotta tell you this one...

To echo your remarks above, I'm pretty sure one could go quite far even with something as non-invasive as a GHC preprocessor. My opening gambit would be to extend the grammar of kinds as follows, with a translation (#) back to current kinds:

  kind ::= *                         #*                    = *
         | kind -> kind              #(j -> k)             = #j -> #k
         | {type}                    #{t}                  = *
         | forall x :: kind. kind    #(forall x :: j. k)   = #k

Note that this forall abstracts type-level stuff in a given kind, not kinds themselves, so the bound variable can only occur inside the {..} used to lift types up to kinds. Correspondingly, when we smash the {t} kinds all to *, we can dump the polymorphism.

Now populate the {t} kinds by lifting expressions to the type level: these look like {e} where e :: t is built from fully applied constructors and variables. That's certainly a total fragment! The preprocessor will need to cook up the usual bunch of gratuitous type constructors, one for each data constructor used inside {..}, in order to translate expressions to types. It will need to perform tighter checks on class and data declarations (ideally by constructing objects one level down which express an equivalent typing problem) but GHC already has the necessary functionality to check programs. It might be possible to cut down on the amount of {..} you need to festoon your code with. On the other hand, it might be good to delimit changes of level clearly, especially given existing namespace policies. It's not much but it's a start, it's super-cheap, and it's compatible with a much more sophisticated future.

I program in this language already, then I comment out the kinds and types I want and insert their translations by hand. At the moment, I have to write types like

  data Case :: ({a} -> *) -> ({b} -> *) -> {Either a b} -> * where
    InL :: f {a} -> Case f g {Left a}
    InR :: g {b} -> Case f g {Right b}

rather than programs like, er... either. But perhaps we could also figure out how to translate either to a type function.

However, the numeric constraints project will need more than a preprocessor, because it delivers specific new constraint-solving functionality during type inference, rather than just the precooking of first-order value unification problems as first-order type unification problems. Indeed, with rank-n types, it could be quite fun.

I'm sure there are devils yet in details, but the prospects for not-quite-dependent types by re-use rather than hard labour look quite good, and forwards-compatible with yer actual dependent functions, etc, further down the line. We live in interesting times! 
All the best Conor ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] What's new on Hackage this week? March 14, 2009
http://archhaskell.wordpress.com/2009/03/14/arch-haskell-news-mar-14-2009/ A regular update of Haskell in Arch Linux Arch now has 974 Haskell packages in AUR. That’s 12 new packages this week, and lots of updates as well. See the blog for the full list of updates. -- Don ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Proof of induction case of simple foldl identity.
At 8:45 PM + 3/14/09, R J wrote: Can someone provide the induction-case proof of the following identity: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y If foldl is defined as usual: foldl :: (b - a - b) - b - [a] - b foldl f e [] = e foldl f e (x : xs) = myFoldl f (f e x) xs The first two cases, _|_ and [], are trivial. Here's my best attempt at the (y : ys) case (the left and right sides reduce to expressions that are obviously equal, but I don't know how to show it): Case (y : ys). Careful. Your introduced variables y and ys already appear in the equation to be proved. Try introducing fresh variables. Left-side reduction: foldl (-) ((-) x y) (y : ys) = {second equation of foldl} foldl (-) ((-) ((-) x y) y) ys = {notation} foldl (-) ((-) (x - y) y) ys = {notation} foldl (-) ((x - y) - y) ys = {arithmetic} foldl (-) (x - 2 * y) ys Right-side reduction: (foldl (-) x (y : ys)) - y = {second equation of foldl} (foldl (-) ((-) x y) ys) - y = {induction hypothesis: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y} ((foldl (-) x ys) - y) - y = {arithmetic} (foldl (-) x ys) - 2 * y Thanks as always.___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Proof of induction case of simple foldl identity.
Am Samstag, 14. März 2009 21:45 schrieb R J: Can someone provide the induction-case proof of the following identity:

foldl (-) ((-) x y) ys = (foldl (-) x ys) - y

If foldl is defined as usual:

    foldl :: (b -> a -> b) -> b -> [a] -> b
    foldl f e [] = e
(R) foldl f e (x : xs) = foldl f (f e x) xs

The first two cases, _|_ and [], are trivial. Here's my best attempt at the (y : ys) case (the left and right sides reduce to expressions that are obviously equal, but I don't know how to show it): Case (y : ys). Left-side reduction: foldl (-) ((-) x y) (y : ys) = {second equation of foldl} foldl (-) ((-) ((-) x y) y) ys = {notation} foldl (-) ((-) (x - y) y) ys = {notation} foldl (-) ((x - y) - y) ys = {arithmetic} foldl (-) (x - 2 * y) ys Right-side reduction: (foldl (-) x (y : ys)) - y = {second equation of foldl} (foldl (-) ((-) x y) ys) - y = {induction hypothesis: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y} ((foldl (-) x ys) - y) - y = {arithmetic} (foldl (-) x ys) - 2 * y Thanks as always.

Consider a one-element list.

foldl (-) (x-y) [z] = (x-y)-z
(foldl (-) x [z]) - y = (x-z)-y

So a necessary condition for the identity to generally hold is that the Num instance obeys the law

(L) forall u v w. (u - v) - w = (u - v) - w

Then take as inductive hypothesis that zs is a list such that

forall a b. foldl (-) (a-b) zs = (foldl (-) a zs) - b

Let z be an arbitrary value of appropriate type, x and y too. Then

  foldl (-) (x - y) (z:zs)
= foldl (-) ((x-y)-z) zs        (R)
= (foldl (-) (x-y) zs) - z      (IH, a = x-y, b = z)
= ((foldl (-) x zs) - y) - z    (IH, a = x, b = y)
= ((foldl (-) x zs) - z) - y    (L, u = foldl (-) x zs, v = y, w = z)
= (foldl (-) (x-z) zs) - y      (IH, a = x, b = z)
= (foldl (-) x (z:zs)) - y      (R)

The trick is to formulate the inductive hypothesis with enough generality, so you have strong foundations to build on. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
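Law (L), with the variable names as intended (the thread corrects the statement to swap the last two subtractions), is easy to sanity-check for Integer, where it follows from ordinary arithmetic. A quick sketch (`propL` and `checkL` are names introduced here):

```haskell
-- (L): subtracting v then w is the same as subtracting w then v.
propL :: Integer -> Integer -> Integer -> Bool
propL u v w = (u - v) - w == (u - w) - v

-- Exhaustive check over a small cube of values.
checkL :: Bool
checkL = and [ propL u v w | u <- [-3..3], v <- [-3..3], w <- [-3..3] ]
```

Any Num instance for which checkL-style reasoning fails (e.g. saturating arithmetic) breaks the foldl identity as well.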
Re: [Haskell-cafe] Natural Numbers: Best implementation?
On Fri, 13 Mar 2009, Mark Spezzano wrote:

1. Don't bother. Just use Integer.
2. Use the type data Natural = Zero | Succ !Natural
3. Use the following definition taken from the Gentle Introduction to Haskell 98: newtype Natural = MakeNatural Integer

This option looks like the approach taken by the non-negative package. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Proof of induction case of simple foldl identity.
On Saturday 14 March 2009, Daniel Fischer wrote: Am Samstag, 14. März 2009 21:45 schrieb R J: Can someone provide the induction-case proof of the following identity: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y If foldl is defined as usual: foldl :: (b - a - b) - b - [a] - b foldl f e [] = e (R)foldl f e (x : xs) = foldl f (f e x) xs The first two cases, _|_ and [], are trivial. Here's my best attempt at the (y : ys) case (the left and right sides reduce to expressions that are obviously equal, but I don't know how to show it): Case (y : ys). Left-side reduction: foldl (-) ((-) x y) (y : ys) = {second equation of foldl} foldl (-) ((-) ((-) x y) y) ys = {notation} foldl (-) ((-) (x - y) y) ys = {notation} foldl (-) ((x - y) - y) ys = {arithmetic} foldl (-) (x - 2 * y) ys Right-side reduction: (foldl (-) x (y : ys)) - y = {second equation of foldl} (foldl (-) ((-) x y) ys) - y = {induction hypothesis: foldl (-) ((-) x y) ys = (foldl (-) x ys) - y} ((foldl (-) x ys) - y) - y = {arithmetic} (foldl (-) x ys) - 2 * y Thanks as always. Consider a one-element list. foldl (-) (x-y) [z] = (x-y)-z (foldl (-) x [z]) - y = (x-z)-y So a necessary condition for the identity to generally hold is that the Num instance obeys the law (L) forall u v w. (u - v) - w = (u - v) - w Typo? :) (L) forall u v w. (u - v) - w = (u - w) - v Then take as inductive hypothesis that zs is a list such that forall a b. foldl (-) (a-b) zs = (foldl (-) a zs) - b Let z be an arbitrary value of appropriate type, x and y too. Then foldl (-) (x - y) (z:zs) = foldl (-) ((x-y)-z) zs (R) = (foldl (-) (x-y) zs) - z (IH, a = x-y, b = z) = ((foldl (-) x zs) - y) - z (IH, a = x, b = y) = ((foldl (-) x zs) - z) - y (L, u = foldl (-) x zs, v = y, w = z) = (foldl (-) (x-z) zs) - y (IH, a = x, b = z) = (foldl (-) x (z:zs)) - y (R) The trick is to formulate the inductive hypothesis with enough generality, so you have strong foundations to build on. 
-- Pozdrawiam, Marcin Kosiba ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: How to catch error in array index when debugging
Colin Paul Adams wrote: Adrian == Adrian Neumann aneum...@inf.fu-berlin.de writes: Adrian You can use the ghci debugger http://www.haskell.org/ghc/docs/latest/html/users_guide/ghci- Adrian debugger.html Adrian it can set breakpoints on exceptions. So i tried adding the -fbreak-on-error flag. It made no difference - it still exited: interactive: Error in array index interactive: interrupted interactive: warning: too many hs_exit()s IIRC, this is because you are catching exceptions at some higher level where you actually find out that this kind of an exception is not handled and give up. If your application is Gtk2Hs then this library will do the catching you do not like in this case. Try to use -fbreak-on-exception instead of -fbreak-on-error. If you try this you may need to set break-on-exception late enough in the execution (use some breakpoint) so that you are not stopping at places which are ok (are correctly handled in an exception handler). In addition to other solutions mentioned here you can also setup conditional breakpoints in GHCi because GHCi breakpoints can be scripted. This is what I mostly use. Most of my debugging is done by writing GHCi debug scripts. Some ideas how to do it are here: http://permalink.gmane.org/gmane.comp.lang.haskell.glasgow.user/16270 It does not work well for interactive command line applications now, but it will work well for them when GHC 6.10.2 is out (the three most critical patches were merged). This alternative may have steep learning curve for such a simple thing but once mastered it works well. Peter. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Proof of induction case of simple foldl identity.
Am Samstag, 14. März 2009 22:44 schrieb Marcin Kosiba: (L) forall u v w. (u - v) - w = (u - v) - w Typo? :) (L) forall u v w. (u - v) - w = (u - w) - v Sure. Thanks for spotting it. I had (x-y)-z = (x-z)-y first, then decided it would be better to use different variable names... ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Style Guide for Haskell Code
On Sat, 14 Mar 2009, Martijn van Steenbergen wrote: Henning Thielemann wrote: Of course, style is a matter of taste. So there are as many good styles as programmers and there won't be an official style guide, I'm afraid. While that is true, it's no valid reason not to have a style guide. Sun's style guide for Java has been very successful, improving the legibility of Java code in general. No one has to adhere to it, but most people do anyway.

I'm glad about everyone who follows the tips I have written in Category:Style, but I can by no means call them official. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Natural Numbers: Best implementation?
On Fri, 13 Mar 2009, Wolfgang Jeltsch wrote: Am Freitag, 13. März 2009 09:21 schrieb Roman Cheplyaka: * Alexander Dunlap alexander.dun...@gmail.com [2009-03-12 20:01:57-0700] Also, a lot of functions just take Integers so it would be more of a pain to use. AFAIK there are very few functions that take Integers. Many functions take instances of Integral, and it's not a problem to make your own type such an instance. I'd say that it is a problem. Class instances should satisfy certain laws. (Although these laws are often not stated explicitly, they are assumed to hold by users of the class, and they should hold to make the instance sensible.) In the case of Num, I would expect num + negate num to equal num.

equal zero ? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
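To make the law concern concrete, here is a sketch of the Gentle-Introduction-style wrapper (option 3 above) with a Num instance. This is illustrative code, not taken from any package: `negate` has no total definition for naturals, so `num + negate num == 0` can only hold at zero.

```haskell
newtype Natural = MakeNatural Integer deriving (Eq, Ord, Show)

-- Smart constructor enforcing the invariant.
toNatural :: Integer -> Natural
toNatural n
  | n < 0     = error "toNatural: negative"
  | otherwise = MakeNatural n

instance Num Natural where
  MakeNatural a + MakeNatural b = MakeNatural (a + b)
  MakeNatural a * MakeNatural b = MakeNatural (a * b)
  MakeNatural a - MakeNatural b = toNatural (a - b)  -- partial!
  negate (MakeNatural 0)        = MakeNatural 0
  negate _                      = error "Natural: negate"  -- law broken here
  abs                           = id
  signum (MakeNatural n)        = MakeNatural (signum n)
  fromInteger                   = toNatural
```

The arithmetic that stays inside the naturals works fine; it is exactly the Num laws involving negation that cannot be satisfied.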
[Haskell-cafe] Re: Has anybody replicated =~ s/../../ or even something more basic for doing replacements with pcre haskell regexen?
So, I tweaked Text.Regex to have the behavior I need. http://patch-tag.com/repo/haskell-learning/browse/regexStuff/pcreReplace.hs

FWIW, the problem I was trying to solve was deleting single newlines but not strings of newlines in a document. Dead simple for pcre-regex with lookaround. But, I think, impossible with posix regex.

-- replace single newlines, but not strings of newlines (requires pcre look-around (lookaround, lookahead, lookbehind, for googlebot)) http://perldoc.perl.org/perlre.html
testPcre = subRegex (mkRegex "(?<!\n)\n(?!\n)") "asdf\n \n\n\nadsf" "" == "asdf \n\n\nadsf"

Can I lobby for this to make its way into the Regex distribution? Really, I would argue that every regex flavor should have all the functions that Text.Regex gets, not just posix. (subRegex is just the most important, to my mind.) Otherwise I'll make my own RegexHelpers hackage package or something. Hard for me to see how to do this in an elegant way since the pcre packages are so polymorphic-manic. I'm sure there is a way though. Or if you point me to the darcs head of regex I'll patch that directly.

2009/3/14 Thomas Hartman tphya...@gmail.com: Right, I'm just saying that a subRegex that worked on pcre regex matches would be great for people used to perl regexen and unused to posix -- even if it only allowed a string replacement, and didn't have all the bells and whistles of =~ s/../../ in perl.

2009/3/12 ChrisK hask...@list.mightyreason.com Thomas Hartman wrote: Is there something like subRegex... something like =~ s/.../.../ in perl... for haskell pcre Regexen? I mean, subRegex from Text.Regex of course: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/regex-compat Thanks for any advice, thomas. Short answer: No. This is a FAQ. The usual answer to your follow-up Why not? is that the design space is rather huge. 
Rather than justify this statement, I will point at the complicated module: http://hackage.haskell.org/packages/archive/split/0.1.1/doc/html/Data-List-Split.html The above module is a wide range of strategies for splitting lists, which is a much simpler problem than your subRegex request, and only works on lists. A subRegex library should also work on bytestrings (and Seq). At the cost of writing your own routine you get exactly what you want in a screen or less of code, see http://hackage.haskell.org/packages/archive/regex-compat/0.92/doc/html/src/Text-Regex.html#subRegex for subRegex which is 30 lines of code. Cheers, Chris ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
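For what it's worth, this particular task (delete a lone newline, keep runs of newlines) does not strictly need regexes at all. A plain-list sketch, matching the testPcre expectation earlier in the thread (`squashSingles` is a name introduced here):

```haskell
import Data.List (group)

-- Delete newline runs of length exactly one; keep runs of two or more.
squashSingles :: String -> String
squashSingles = concatMap keep . group
  where
    keep "\n" = ""   -- a run of exactly one newline: drop it
    keep run  = run  -- everything else, incl. longer newline runs: keep
```

For example, `squashSingles "asdf\n \n\n\nadsf"` yields `"asdf \n\n\nadsf"`. It only handles this fixed pattern, of course; the general subRegex-for-pcre request stands.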
Re: [Haskell-cafe] State monad is missing Applicative instance
On Sat, 14 Mar 2009, Peter Verswyvelen wrote: I was using transformers but still had to implement the Applicative instance of State. This package contains an applicative instance for StateT but not for State.

In 'transformers', State is a type synonym for StateT Identity and thus does not need its own instance declaration. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
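Concretely, since State s = StateT s Identity in transformers, the Applicative instance for StateT covers State with no extra declaration. A small sketch (`tick` and `pairOf` are illustrative names):

```haskell
import Control.Applicative ((<$>), (<*>))
import Control.Monad.Trans.State (State, state, runState)

-- Return the current counter and increment it.
tick :: State Int Int
tick = state (\s -> (s, s + 1))

-- Applicative style works directly on State; no orphan instance needed.
pairOf :: State Int (Int, Int)
pairOf = (,) <$> tick <*> tick
```

Here `runState pairOf 0` evaluates to `((0,1),2)`: two ticks, threaded left to right.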
[Haskell-cafe] Re: Has anybody replicated =~ s/../../ or even something more basic for doing replacements with pcre haskell regexen?
Also, consider stealing the regex susbt code from: http://shootout.alioth.debian.org/u64q/benchmark.php?test=regexdnalang=ghcid=4 tphyahoo: So, I tweaked Text.Regex to have the behavior I need. http://patch-tag.com/repo/haskell-learning/browse/regexStuff/pcreReplace.hs FWIW, the problem I was trying to solve was deleting single newlines but not strings of newlines in a document. Dead simple for pcre-regex with lookaround. But, I think, impossible with posix regex. -- replace single newlines, but not strings of newlines (requires pcre look-around (lookaround, lookahead, lookbehind, for googlebot)) http://perldoc.perl.org/perlre.html testPcre = ( subRegex (mkRegex (?!\n)\n(?!\n)) asdf\n \n\n\nadsf ) == asdf \n\n\nadsf Can I lobby for this to make its way into the Regex distribution? Really, I would argue that every regex flavor should have all the functions that Text.Regex get, not just posix. (subRegex is just the most important, to my mind) Otherwise I'll make my own RegexHelpers hackage package or something. Hard for me to see how to do this in an elegant way since the pcre packages are so polymorphic-manic. I'm sure there is a way though. Or if you point me to the darcs head of regex I'll patch that directly. 2009/3/14 Thomas Hartman tphya...@gmail.com: Right, I'm just saying that a subRegex that worked on pcre regex matches would be great for people used to perl regexen and unused to posix -- even it only allowed a string replacement, and didn't have all the bells and whistles of =~ s../../../ in perl. 2009/3/12 ChrisK hask...@list.mightyreason.com Thomas Hartman wrote: Is there something like subRegex... something like =~ s/.../.../ in perl... for haskell pcre Regexen? I mean, subRegex from Text.Regex of course: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/regex-compat Thanks for any advice, thomas. Short answer: No. This is a FAQ. The usual answer to your follow up Why not? is that the design space is rather huge. 
Rather than justify this statement, I will point at the complicated module: http://hackage.haskell.org/packages/archive/split/0.1.1/doc/html/Data-List-Split.html The above module is a wide range of strategies for splitting lists, which is a much simpler problem than your subRegex request, and only works on lists. A subRegex library should also work on bytestrings (and Seq). At the cost of writing your own routine you get exactly what you want in a screen or less of code, see http://hackage.haskell.org/packages/archive/regex-compat/0.92/doc/html/src/Text-Regex.html#subRegex for subRegex which is 30 lines of code. Cheers, Chris ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Has anybody replicated =~ s/../../ or even something more basic for doing replacements with pcre haskell regexen?
On 2009 Mar 14, at 19:01, Thomas Hartman wrote: FWIW, the problem I was trying to solve was deleting single newlines but not strings of newlines in a document. Dead simple for pcre-regex with lookaround. But, I think, impossible with posix regex.

s/(^|[^\n])\n($|[^\n])/\1\2/g;

POSIX regexen may be ugly, but they're capable. -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Does anybody dislike implicit params as much asI do?
Sittampalam, Ganesh wrote: I think they have a useful place in propagating semi-global configuration information without imposing huge syntactic overhead.

Right, for instance:

type MyMonad a = (?env :: Env) => IO a

No lift needed! I was hoping to use IPs to do OO-style implicit type conversion from a derived type to a base type. For instance:

type Base = forall a. ((?f1 :: Int) => a) -> a

field1 :: Base -> Int
field1 b = b ?f1

type Derived = forall a. ((?f1 :: Int, ?f2 :: String) => a) -> a

d :: Derived
d x = let { ?f1 = 3; ?f2 = "Hello" } in x

f1d :: Int
f1d = field1 d

Annoyingly, GHC objects to the field1 d application. -- Ashley Yakeley ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
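The semi-global-configuration use, at least, does work today. A minimal self-contained sketch (the `Env`, `verbose`, `prefix`, and `logLine` names are made up for illustration; made pure here so the behavior is easy to observe):

```haskell
{-# LANGUAGE ImplicitParams #-}

data Env = Env { verbose :: Bool, prefix :: String }

-- ?env is threaded implicitly: no lift, no extra argument at each call.
logLine :: (?env :: Env) => String -> String
logLine msg
  | verbose ?env = prefix ?env ++ msg
  | otherwise    = ""
```

Binding the parameter once at the top, e.g. `let ?env = Env True "log: " in logLine "hi"`, gives `"log: hi"`; every function in scope of the binding sees the same Env.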
[Haskell-cafe] RE: Hashing over equivalence classes
Hi Roman, So are you really talking about an equivalence relation on the function's domain? The reason I ask is that it is well known that f(a)=f(b) establishes an equivalence relation on f's co-domain! Vasili ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Design Patterns by Gamma or equivalent
Mark Spezzano wrote: Because Haskell is not OO, it is functional, I was wondering if there is some kind of analogous “design pattern”/”template” type concept that describe commonly used functions that can be “factored out” in a general sense to provide the same kind of usefulness that Design Patterns do for OOP. Basically I’m asking if there are any kinds of “common denominator” function compositions that are used again and again to solve problems. If so, what are they called? Most of the (particular) problems OO design patterns solve are non-issues in Haskell because the language is more expressive. The issues giving rise to patterns like Visitor and Factory are non-issues, so their solutions are trivial. A number of other patterns can actually be written down once and for all (in higher-order functions like foldr, map,...) instead of needing repetition. But that's not to say we don't have our own expressiveness deficiencies ;) The analogous term for the genre is functional pearl, though the individual pearls don't tend to be as codified as in OOP. One example is using coproducts for open unions[1] which solves the dual problem to Visitor. Another is using open recursion[2]. A third example along the same track is using two-level types[3]. But again a lot of the patterns, once discovered, get turned into libraries that can be used off the shelf: e.g. the two-continuation solution for efficient and fair backtracking search[4], and list-fusion techniques[5]. There also a number of idioms which are similar in scope to the idioms that arise in other languages: using tail recursion, accumulators, continuation-passing transformations, closures over recursion[6], Schwartzian transforms, etc. And then there are some things like monoids which fall somewhere between idiom and pearl. Some specific OO patterns have direct analogues in defunctionalization[7]. 
For the most part defunctionalization is something best left to the compiler, but on occasion we want to be explicit about it and this too is an idiom/pearl border case IMO. And, of course, there's the entire cottage industry of using category theory for fun and profit. [1] http://www.cs.nott.ac.uk/~wss/Publications/DataTypesALaCarte.pdf [2] http://www.cs.utexas.edu/~wcook/Drafts/2006/MemoMixins.pdf [3a] http://web.cecs.pdx.edu/~sheard/papers/generic.ps [3b] http://web.cecs.pdx.edu/~sheard/papers/JfpPearl.ps [4a] http://okmij.org/ftp/papers/LogicT.pdf [4b] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logict [5a] http://www.cse.unsw.edu.au/~dons/papers/CSL06.html [5b] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bytestring [5c] http://www.cse.unsw.edu.au/~dons/papers/CLS07.html [5d] http://hackage.haskell.org/cgi-bin/hackage-scripts/package/stream-fusion [5e] http://homepages.inf.ed.ac.uk/wadler/papers/vanish/vanish.pdf [6] For lack of a better name. I mean doing this: -- Here the a,b,c,d variables are captured in the closure for go recursive a b c d = go where go 0 = ...a...b... go n = ...c...d...go (n-1) instead of this: -- ...instead of needing to be passed through the recursion each time recursive a b c d 0 = ...a...b... recursive a b c d n = ...c...d...recursive a b c d (n-1) [7a] http://blog.plover.com/prog/defunctionalization.html -- Live well, ~wren ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
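To make the "written down once and for all" point concrete, here is the standard move for a custom type (an illustrative sketch, not code from the cited papers): factor the recursion pattern into a fold, after which each "visitor" is just a tuple of functions rather than a class hierarchy.

```haskell
data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

-- The recursion scheme for Expr, written once.
foldExpr :: (Int -> r) -> (r -> r -> r) -> (r -> r -> r) -> Expr -> r
foldExpr lit add mul = go
  where
    go (Lit n)   = lit n
    go (Add a b) = add (go a) (go b)
    go (Mul a b) = mul (go a) (go b)

-- Two "visitors", each a one-liner; adding a third needs no new machinery.
eval :: Expr -> Int
eval = foldExpr id (+) (*)

render :: Expr -> String
render = foldExpr show
                  (\a b -> "(" ++ a ++ " + " ++ b ++ ")")
                  (\a b -> "(" ++ a ++ " * " ++ b ++ ")")
```

This is exactly the foldr/map story generalized to an arbitrary algebraic datatype, and it is the trivial solution to the problem Visitor exists to solve.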
Re: [Haskell-cafe] Re: Alternative to Data.Binary
2009/3/15 Achim Schneider bars...@web.de Hmmm. Output-iteratees are a very, very, interesting thing to think about.

Careful, comments like that have a tendency to invoke Oleg. Next thing you know:

  Interesting Question    -->  Maybe you could...
          ^                        |
          |                        |
          |                        v
  Sound of heads popping  <--  Oleg: Yes you can, *and*...

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] I can't install hs-plugins beceause of Linker.h
Hi! I was trying to install hs-plugins both from sources $./Setup.lhs ... and with $cabal fetchcabal install and both failed. It always reports that Linker.h is missing. I have tried to use --extra-include-dirs with path to ghc-6.10.1 dir and ghc-6.10.1/include dir. Nothing changed. Here is a report from Setup.lhs: Setup.lhs:2:2: Warning: In the use of `defaultUserHooks' (imported from Distribution.Simple): Deprecated: Use simpleUserHooks or autoconfUserHooks, unless you need Cabal-1.2 compatibility in which case you must stick with defaultUserHooks Warning: defaultUserHooks in Setup script is deprecated. Configuring plugins-1.4.0... Warning: This package indirectly depends on multiple versions of the same package. This is highly likely to cause a compile failure. package ghc-6.10.1 requires Cabal-1.6.0.1 package plugins-1.4.0 requires Cabal-1.6.0.2 checking build system type... i386-apple-darwin9.6.0 checking for ghc... ghc checking for value of __GLASGOW_HASKELL__... 610 checking for ghc library directory... /opt/local/lib/ghc-6.10.1/. checking for gcc... gcc checking for C compiler default output file name... a.out checking whether the C compiler works... yes checking whether we are cross compiling... no checking for suffix of executables... checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ANSI C... none needed checking for arc4random... yes checking for a BSD-compatible install... 
/usr/bin/install -c configure: creating ./config.status config.status: creating config.mk config.status: creating testsuite/makewith/io/TestIO.conf config.status: creating testsuite/makewith/unsafeio/Unsafe.conf config.status: creating config.h config.status: config.h is unchanged Setup.lhs: Missing dependency on a foreign library: * Missing header file: Linker.h This problem can usually be solved by installing the system package that provides this library (you may need the -dev version). If the library is already installed but in a non-standard location then you can use the flags --extra-include-dirs= and --extra-lib-dirs= to specify where it is. -- Yuri S. Kashnikov Novosibirsk State University, Russia 2 Pirogova street 630090, Novosibirsk-90 yuri.kashnik...@gmail.com ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Design Patterns by Gamma or equivalent
wren:

There are also a number of idioms which are similar in scope to the idioms that arise in other languages: using tail recursion, accumulators, continuation-passing transformations, closures over recursion[6], Schwartzian transforms, etc.

[6] For lack of a better name. I mean doing this:

    -- Here the a,b,c,d variables are captured in the closure for go
    recursive a b c d = go
      where
        go 0 = ...a...b...
        go n = ...c...d...go (n-1)

instead of this:

    -- ...instead of needing to be passed through the recursion each time
    recursive a b c d 0 = ...a...b...
    recursive a b c d n = ...c...d...recursive a b c d (n-1)

Static argument transformation:
http://hackage.haskell.org/trac/ghc/ticket/888
http://www.haskell.org/pipermail/cvs-ghc/2008-April/041959.html

Or, more generally, worker/wrapper:
http://www.workerwrapper.com/

Take your pick :)
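Filling in the `...` pseudocode above, here is a small self-contained sketch of the same idiom (the function name `myFoldr` is illustrative, not from the thread): the static arguments `step` and `z` never change across the recursion, so they are captured once in the closure rather than passed through every call.

```haskell
-- Static-argument / closure-over-recursion idiom: 'step' and 'z' are
-- closed over by the local worker 'go', which only threads the part
-- of the state that actually changes (the remaining list).
myFoldr :: (a -> b -> b) -> b -> [a] -> b
myFoldr step z = go
  where
    go []     = z
    go (x:xs) = step x (go xs)

main :: IO ()
main = print (myFoldr (+) 0 [1..10 :: Int])  -- prints 55
```

This is exactly the rewrite GHC's static argument transformation performs mechanically; writing it by hand also often reads more clearly.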
Re: [Haskell-cafe] A systematic method for deriving a definition of foldl using foldr?
R J wrote:

2. I believe that the reverse implementation--namely, implementing foldr in terms of foldl--is impossible. What's the proof of that?

As others have said, foldr in terms of foldl is impossible when infinite lists are taken into account. For finite lists it's easy though:

    \f z xs -> foldl (\yz y -> yz . f y) id xs z

3. Any advice on how, aside from tons of practice, to develop the intuition for rapidly seeing solutions to questions like these would be much appreciated. The difficulty a newbie faces in answering seemingly simple questions like these is quite discouraging.

The solution to this problem, while seemingly simple, is not so simple that a newbie should get discouraged over it, IMO. The essential trick here is to come up with the idea of using the fold to build a function, which in turn actually does what you want--- rather than trying to do anything useful with the fold itself. This idea comes in two parts: implementation indirection (let another function do the real work) and continuation-passing (to build the other function). Both of those tricks are ones we use all the time, but the foldr/foldl problem weds them together in a particularly perverse (though deliciously elegant) manner.

In order to develop an intuition for the interplay of CPS and recursion, consider this exercise. You have a type for binary trees, and you have this function for getting the size of a tree:

    size (Leaf _) = 1
    size (Branch l r) = size l + size r

Unfortunately you have very deep trees, and so this function gives stack-overflow errors. Rewrite it to be tail recursive. That one's not too hard because the order of recursion doesn't matter (addition is associative and commutative).

Now, you have a similar function, and you're worried about the fact that repeated list concatenation can take O(n^2) time. Rewrite this one so it's tail recursive--- making sure that the leaves still come out in the right order.
If you're familiar with the difference list trick[1], then also rewrite it so you take only O(n) time building the list:

    toList (Leaf x) = [x]
    toList (Branch l r) = toList l ++ toList r

[1] See http://hackage.haskell.org/cgi-bin/hackage-scripts/package/dlist or look at the ShowS type used in the Prelude for the shows function.

--
Live well,
~wren
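To see the finite-list construction from earlier in this message in action, here is a runnable sketch wrapping that lambda in a named function (`foldrViaFoldl` is my name for it, not from the thread). The foldl builds up a function by composition, and that function is then applied to the seed, yielding the right-associated result that foldr would produce on a finite list.

```haskell
-- foldr expressed via foldl, valid for finite lists only: the foldl
-- accumulates a function (id . f x1 . f x2 . ...) which, applied to z,
-- evaluates as f x1 (f x2 (... z)) -- i.e. right-associated.
foldrViaFoldl :: (a -> b -> b) -> b -> [a] -> b
foldrViaFoldl f z xs = foldl (\yz y -> yz . f y) id xs z

main :: IO ()
main = do
  -- Right-associated: 3 ^ (5 ^ 2) = 3 ^ 25, matching foldr (^) 2 [3,5]
  print (foldrViaFoldl (^) 2 [3, 5 :: Integer])  -- prints 847288609443
  print (foldr         (^) 2 [3, 5 :: Integer])  -- prints 847288609443
```

The (^) example is a useful sanity check precisely because, as noted elsewhere in this thread, exponentiation is neither associative nor commutative, so any accidental left-association would show up immediately.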