Re: [Haskell-cafe] Ease of Haskell development on OS X?
On 20 Mar 2009, at 16:56, Mark Spezzano wrote: Hi, I've been thinking of changing over to an iMac from my crappy old PC running Windows Vista. Question: Does the iMac have good support for Haskell development? As good as, if not better than, other platforms I've found; you get none of the weird problems with libraries that aren't designed to run on Windows, and more choice of editors than on Linux. Question: What environment setups do people commonly use (e.g. Eclipse, Xcode etc)? I use SubEthaEdit and a Terminal window; I know many people that use emacs/vi, and one or two that use TextMate. Question: Are there any caveats I should be aware of before changing systems (i.e. unpleasant surprises)? Not particularly – it's mostly like running Linux, but without the headaches when things break, and with more choice of software. I want to be able to use the machine for Haskell OpenGL programming. Other than choosing the graphics card carefully, an iMac will do you very well. Hope that helps. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Ease of Haskell development on OS X?
On 20 Mar 2009, at 18:08, Don Stewart wrote: tom.davie: Other than choosing the graphics card carefully, an iMac will do you very well. Hope that helps. This is very useful. Could the Mac users add information (and screenshots?) to the OSX wiki page, http://haskell.org/haskellwiki/OSX I'm not really sure there's much to add. It mostly just works™. Is there something that can be extracted from this discussion to add to it? Bob
Re: [Haskell-cafe] Ease of Haskell development on OS X?
On 20 Mar 2009, at 18:46, Don Stewart wrote: tom.davie: On 20 Mar 2009, at 18:08, Don Stewart wrote: tom.davie: Other than choosing the graphics card carefully, an iMac will do you very well. Hope that helps. This is very useful. Could the Mac users add information (and screenshots?) to the OSX wiki page, http://haskell.org/haskellwiki/OSX I'm not really sure there's much to add. It mostly just works™. Is there something that can be extracted from this discussion to add to it? Imagine you're new to Haskell, or the Mac. What do you need to know to get started developing new Haskell software? Is that information on the page? Thankfully, yes; all you need to do is either (a) go to haskell.org and download it, or (b) download MacPorts, and port install ghc... Now you've got a Haskell environment like any other. I guess I could add a chunk of text about good editors, but I'm not sure if that's suitable, is it? Bob
Re: [Haskell-cafe] Re: Haskell Logo Voting has started!
On 19 Mar 2009, at 11:39, Wolfgang Jeltsch wrote: On Wednesday, 18 March 2009 13:31, you wrote: On Wed, Mar 18, 2009 at 04:36, Wolfgang Jeltsch wrote: On Wednesday, 18 March 2009 10:03, Benjamin L. Russell wrote: Just go through the list, choose your top favorite, and assign rank 1 to it; Is rank 1 the best or the worst? The Condorcet info page makes it clear that higher is better. http://www.cs.cornell.edu/w8/~andru/civs/rp.html I certainly voted the wrong way round then, and the buttons do the wrong thing – pushing "move to top" (one would assume meaning "put this in the top spot") gives something rank 1. Bob
Re: [Haskell-cafe] Re: Haskell Logo Voting has started!
On 17 Mar 2009, at 15:24, Heinrich Apfelmus wrote: Eelco Lempsink wrote: Hi there! I updated a couple of logo versions and ungrouped and regrouped the (former) number 31. Other than that, there was nothing standing in the way of the voting beginning, imho, so I started up the competition. By now, I suppose everybody should have received their ballot. If you think you should have received it but didn't, please report it; I can resend the invitation. Also, for people not directly subscribed to the haskell-cafe mailing list, you can still send ballot requests until the end of the competition (March 24, 12:00 UTC). Make sure the message contains 'haskell logo voting ballot request' (e.g. in the subject). Depending on the winner of this voting round we can decide whether we need to continue with variations. Jared Updike already offered to donate a bit of time to help create several variations. But for now, good luck with sorting those options! :) Thanks for organizing this, finally I can choose ... Oh my god! How am I supposed to make a vote? I can barely remember 3 of the 113 logos, let alone memorize that #106 is the narwhal. There are lots of very good or just good candidates and I would like to order them all to my liking, but without instant visual feedback on the voting ballot, this is a hopeless task. Since I have about 10 minutes to spare for voting, I'm just going to pick 5 candidates at random and order these? Actually, I think I prefer to be completely paralyzed by the overwhelming choice instead and not vote at all. Alternatively, it seems that it's possible to upload rankings from a file. But which format? And is there a zip file with the logo proposals so I can try to arrange them via drag-and-drop in some picture gallery application?
A simple majority vote is clearly inadequate for this vote, but I'm afraid that without assisting technology (instant and visual feedback), the voting process will more or less deteriorate to that due to the difficulty of creating quality input votes. I have to agree that the UI for voting is not the best I've ever seen. On the other hand, it's pretty easy to select the few logos that you like, and push them all to the top, select the ones you'd accept, and push them up just below, and finally select the ones you absolutely don't like and push them all the way down. That at least is what I did. Bob
Re: [Haskell-cafe] monadic logo
On 12 Mar 2009, at 15:04, Gregg Reynolds wrote: At risk of becoming the most hated man in all Haskelldom, I'd like to suggest that the Haskell logo not use lambda symbols. Or at least not as the central element. Sorry, I know I'm late to the party, but the thing is there is nothing distinctive about lambda; it's common to all FPLs. Besides, Lisp/Scheme already have that franchise. What is distinctive about Haskell is its use of the monad. The Pythagorean monad symbol is wonderfully simple: No, what's distinctive about Haskell is usually the abuse of the monad. Encouraging people to think Haskell is all about monadic programming even more is a recipe for disaster. Just my 2¢ Bob
Re: [Haskell-cafe] Abuse of the monad [was: monadic logo]
On 12 Mar 2009, at 15:16, Andrew Wagner wrote: Can you expand on this a bit? I'm curious why you think this. For two reasons: Firstly, I often find that people use the monadic interface when one of the less powerful ones is both powerful enough and more convenient; parsec is a wonderful example of this. When the applicative instance is used instead of the monadic one, programs rapidly become more readable, because they stop describing the order in which things should be parsed, and start describing the grammar of the language being parsed instead. Secondly, it seems relatively common now for beginners to be told about the IO monad, and start writing imperative code in it, and thinking that this is what Haskell programming is. I have no problem with people writing imperative code in Haskell; it's an excellent imperative language. However, beginners seeing this and picking it up is usually counterproductive – they never learn how to write things in a functional way, and miss out on most of the benefits of doing so. Hope that clarifies what I meant :) Bob
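To make the contrast concrete, here is a small sketch of the same parser written both ways. To keep it self-contained it uses ReadP from base rather than parsec (my substitution, not from the thread), but the monadic-versus-applicative contrast is exactly the same there:

```haskell
import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

number :: ReadP Int
number = read <$> munch1 isDigit

-- Monadic style: spells out the order in which things are parsed.
pairM :: ReadP (Int, Int)
pairM = do
  _ <- char '('
  x <- number
  _ <- char ','
  y <- number
  _ <- char ')'
  return (x, y)

-- Applicative style: reads like the grammar "( number , number )".
pairA :: ReadP (Int, Int)
pairA = (,) <$> (char '(' *> number) <*> (char ',' *> number <* char ')')

main :: IO ()
main = do
  print [r | (r, "") <- readP_to_S pairM "(1,2)"]
  print [r | (r, "") <- readP_to_S pairA "(1,2)"]
```

Both parsers accept the same input; the applicative one just no longer mentions intermediate variables or sequencing, only the shape of the grammar.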
Re: [Haskell-cafe] Abuse of the monad
On 12 Mar 2009, at 15:33, Colin Paul Adams wrote: Thomas Davie tom.da...@gmail.com writes: Firstly, I often find that people use the monadic interface when one of the less powerful ones is both powerful enough and more convenient; parsec is a wonderful example of this. When the applicative instance is used instead of the monadic one, programs rapidly become more readable, because they stop describing the order in which things should be parsed, and start describing the grammar of the language being parsed instead. That's interesting. I recently used parsec 3. I wrote using the monadic interface because I could understand (just about) how to do so. I was looking at the examples in RWH, and I could follow the explanation of the monadic interface much more easily. Perhaps this was because RWH shows how to write using the monadic interface, and then shows how to convert this to the applicative interface. It's hard to follow a tutorial that shows you how to convert from something you aren't starting with. I suspect that this is an interesting corner case of both my two reasons – firstly, monads are too powerful here, and secondly, you perhaps found it easier to think about the operational aspects of the parser than to think about the denotation of what the parser should parse. Is that somewhat accurate? Bob
Re: [Haskell-cafe] Loading 3D points normals into OpenGL?
If you were to strip out all texture loading code, then yes, otherwise, no. Bob On 12 Mar 2009, at 01:36, Duane Johnson wrote: The MTL portion of that library depends on an external DevIL library ... is there a way to specify just the Obj portion which has no such dependency? Thanks, Duane On Mar 11, 2009, at 5:28 PM, Luke Palmer wrote: You might be interested in the obj library: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/obj Luke On Wed, Mar 11, 2009 at 5:23 PM, Duane Johnson duane.john...@gmail.com wrote: Hi, I am considering writing a VRML (.wrl) parser so that I can load points and normals for a game I'm making in Haskell. Is there something around that will already do the trick? Or perhaps another format is preferred and already supported? Thanks, Duane Johnson (canadaduane) http://blog.inquirylabs.com/
Re: [Haskell-cafe] Status of Haskell under OsX
On 27 Feb 2009, at 08:17, Arne Dehli Halvorsen wrote: Manuel M T Chakravarty wrote: I'm planning to purchase a MacBook Pro so I'm wondering how well Haskell is supported under this platform. At least two of the regular contributors to GHC work on Macs. That should ensure that Mac OS X is well supported. Installation is trivial with the Mac OS X installer package: http://haskell.org/ghc/download_ghc_6_10_1.html#macosxintel Good advice, but I've generally found the MacPorts version more consistently built – plus, sudo port install ghc is nice and easy :) Hi, following on from this point: How does one get gtk2hs running on a Mac? I have a MacBook Pro, and I've had ghc installed for some time now. (first 6.8.2 (packaged), then 6.10.1 (packaged), then 6.8.2 via MacPorts 1.6, then 6.10.1 via MacPorts 1.7) I tried to install gtk2hs via MacPorts, but it didn't work. (0.9.12? on 6.8.2, then on 6.10.1) Is there a recipe one could follow? Can I get the preconditions via MacPorts, and then use cabal to install gtk2hs 0.10? For me, this worked:
sudo port install ghc
sudo port install gtk2
sudo port install cairomm
curl http://downloads.sourceforge.net/gtk2hs/gtk2hs-0.10.0.tar.gz > gtk2hs-0.10.0.tar.gz
tar xvfz gtk2hs-0.10.0.tar.gz
then the normal install steps. Bob
Re: [Haskell-cafe] Status of Haskell under OsX
On 27 Feb 2009, at 11:21, Arne Dehli Halvorsen wrote: Thomas Davie wrote: For me, this worked: sudo port install ghc; sudo port install gtk2; sudo port install cairomm; curl http://downloads.sourceforge.net/gtk2hs/gtk2hs-0.10.0.tar.gz > gtk2hs-0.10.0.tar.gz; tar xvfz gtk2hs-0.10.0.tar.gz; then the normal install steps. It worked! I had to throw out gtk2, which was present in an incompatible version. Then I did make/make install, and tried compiling a few apps in the demo catalog. Most of them show up, but with these errors: Xlib: extension RANDR missing on display /tmp/launch-UoNAfJ/:0. Xlib: extension Generic Event Extension missing on display /tmp/launch-UoNAfJ/:0. Yep, I see these errors too – even after sudo port install randr. Testing out the demos, it seems it can't find Graphics.UI.Gtk.Glade, Graphics.Rendering.OpenGL, System.Gnome.GConf, System.Gnome.VFS, Media.Streaming.GStreamer, Graphics.UI.Gtk.MozEmbed, Graphics.UI.Gtk.SourceView, or Graphics.Rendering.Cairo.SVG. Perhaps the demos are out of date? Graphics.Rendering.OpenGL is found in the OpenGL package, and System.Gnome looks unlikely to work on OS X. Bob
Re: Re[2]: [Haskell-cafe] Re: speed: ghc vs gcc
On 20 Feb 2009, at 22:57, Bulat Ziganshin wrote: Hello Don, Saturday, February 21, 2009, 12:43:46 AM, you wrote: gcc -O3 -funroll-loops 0.318 ghc -funroll-loops -D64 0.088 So what did we learn here? nothing new: what you are not interested in real compilers comparison, preferring to demonstrate artificial results I'm not sure what you're getting at, Bulat – it's been demonstrated that ghc is slower than gcc for most cases at the moment (many benchmarks will back this up); *however*, it's also easily verified that ghc has had significantly less effort directed at it than gcc and other imperative compilers, and thus there are many places it can improve greatly. In this case, you've pointed out a really great source of heavy optimisation. Thanks a lot :) Now perhaps it might be an idea to be constructive, rather than trying to stand like Nelson going "HA HA" at the people with the inferior compiler. ;) Bob
Re: [Haskell-cafe] Re: speed: ghc vs gcc
On 20 Feb 2009, at 23:33, Bulat Ziganshin wrote: Hello Achim, Saturday, February 21, 2009, 1:17:08 AM, you wrote: nothing new: what you are not interested in real compilers comparison, preferring to demonstrate artificial results ...that we have a path to get better results than gcc -O3 -funroll-loops, and it's within reach... we even can get there now, albeit not in the most hack-free way imaginable? well, can this be made for C++? yes. moreover, gcc does this trick *automatically*, while with ghc we need to write 50-line program using Template Haskell and then run it through gcc - and finally get exactly the same optimization we got automatic for C code so, again: this confirms that Don is always build artificial comparisons, optimizing Haskell code by hand and ignoring obvious ways to optimize Haskell code. unfortunately, this doesn't work in real live. and even worse - Don reports this as fair Haskell vs C++ comparison You need look no further than the Debian language shootout to see that things really aren't as bad as you're making out – Haskell comes in, in general, less than 3x slower than gcc-compiled C. Of note, of all the managed languages, this is about the fastest – none of the other languages that offer safety and garbage collection etc. get as close as Haskell does. Bob
Re: Re[4]: [Haskell-cafe] Re: speed: ghc vs gcc
On 20 Feb 2009, at 23:41, Bulat Ziganshin wrote: Hello Thomas, Saturday, February 21, 2009, 1:19:47 AM, you wrote: I'm not sure what you're getting at, Bulat – it's been demonstrated that ghc is slower than gcc for most cases at the moment (many benchmarks will back this up), *however*, it's also easily verified that ghc has had significantly less effort directed at it than gcc and other imperative compilers, thus, there are many places it can improve greatly. of course. what fool will say that ghc cannot be optimized the same way as gcc? if we spent the same amount of time for improving ghc back-end as was spent for gcc (tens or hundreds man-years?), then *low-level* Haskell code will become as fast as C one, while remaining several times slower to write Considering Haskell compilers have lots more guaranteed conditions to go on (like referential transparency etc.), I'd imagine actually that given the same amount of effort, they could compile Haskell code to *much* faster code than C. In this case, you've pointed out a really great source of heavy optimisation. Thanks a lot :) Now perhaps it might be an idea to be constructive, rather than trying to stand like Nelson going "HA HA" at the people with the inferior compiler. ghc is superior compiler and it's my main instrument. but it can't make coffee and doesn't contain sophisticated code generator. it's why i dissuade from writing video codecs in haskell and i don't like situation when someone too lazy to test speed yourself tell us tales and attack me when i say about real situation I'd hardly say that dons is too lazy – he has after all contributed rather large chunks of code to coming up with good examples, and optimising ghc. Secondly, I don't see him telling tales either – he's being very honest about the performance of Haskell here, and how it might be improved. Finally, I'd hardly call computing a constant in an arbitrarily complex way a real situation.
I think someone needs to get off their high horse and reflect a little. Bob
Re: Re[2]: [Haskell-cafe] speed: ghc vs gcc vs jhc
On 20 Feb 2009, at 23:44, Bulat Ziganshin wrote: Hello John, Saturday, February 21, 2009, 1:33:12 AM, you wrote: Don't forget jhc: i was pretty sure that jhc will be as fast as gcc :) unfortunately, jhc isn't our production compiler Why not? There's nothing stopping you from choosing any Haskell compiler you like. If jhc gives you the performance you need – use it. Bob
Re: Re[2]: [Haskell-cafe] Re: speed: ghc vs gcc
On 20 Feb 2009, at 23:52, Bulat Ziganshin wrote: Hello Thomas, Saturday, February 21, 2009, 1:41:24 AM, you wrote: You need look no further than the Debian language shootout that things really aren't as bad as you're making out – Haskell comes in in general less than 3x slower than gcc compiled C. you should look inside these tests, as i done :) most of these tests depends on libraries speed. in one test, PHP is 1st. from 2 or 3 tests that depends on compiler speed, one was fooled by adding special function readInt to ghc libs and the rest are written in low-level haskell code - look the sources Shock news – benchmarks lead to compiler and library optimisations. News at 11! Of note, of all the managed languages, this is about the fastest – none of the other languages that offer safety and garbage collection etc get as close as Haskell does. i'm sorry, but this test only shows amount of work spent to optimize these programs. results of some tests was made 10-100 times better, while improving real programs performance needs much more work :) I don't get your point at all. Are you implying that none of the code for any of the other languages up there is optimized in any way? Bob
Re: Re[4]: [Haskell-cafe] speed: ghc vs gcc vs jhc
On 21 Feb 2009, at 00:01, Bulat Ziganshin wrote: Hello Thomas, Saturday, February 21, 2009, 1:52:27 AM, you wrote: i was pretty sure that jhc will be as fast as gcc :) unfortunately, jhc isn't our production compiler Why not? There's nothing stopping you from choosing any Haskell compiler you like. If jhc gives you the performance you need – use it. i don't need jhc speed, i just warn people that believes Don tales. Oh, okay then... if you don't need the speed, stop complaining. Bye. Bob
Re: [Haskell-cafe] Re: speed: ghc vs gcc
On 21 Feb 2009, at 00:10, Ahn, Ki Yung wrote: Thomas Davie wrote: You need look no further than the Debian language shootout to see that things really aren't as bad as you're making out – Haskell comes in in general less than 3x slower than gcc compiled C. Of note, of all the managed languages, this is about the fastest – none of the other languages that offer safety and garbage collection etc get as close as Haskell does. Bob OCaml and Clean seem to be pretty fast too. Very true :). As does C#, but using MS's compiler, not Mono. I think my conclusion from this thread is: stop arguing – someone being wrong on the internet is not worth it – oh, and cool, a new possibly major optimisation for ghc. And finally, something I'd known all along – Haskell is plenty fast enough for writing real-world programs; it's not as fast as C, but I don't care – I write so much better code so much faster in it that the tradeoff becomes worth it. Sorry for getting into the slagging match so much, and count me out of this one from now on. Bob
Re: [Haskell-cafe] Another point-free question (=, join, ap)
Hey, Thanks for all the suggestions. I was hoping that there was some uniform pattern that would extend to n arguments (rather than having to use liftM2, liftM3, etc. or have different 'application' operators in between the different arguments); perhaps not. Oh well :) Sure you can! What you want is Control.Applicative, not Control.Monad. (<*>) is the generic application you're looking for: pure (+) <*> [1,2,3] <*> [4,5,6] gives [5,6,7,6,7,8,7,8,9]. Note that pure f <*> y can be shortened to fmap f y though, and Control.Applicative defines a handy infix version of fmap, (<$>): (+) <$> [1,2,3] <*> [4,5,6] also gives [5,6,7,6,7,8,7,8,9]. Hope that provides what you want Bob
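As a runnable illustration (mine, not from the original mail): one <$> for the function and then one <*> per argument extends uniformly to any arity, with no need for the liftM2/liftM3 family:

```haskell
-- Applicative "generic application": <$> maps the function over the
-- first argument, each further argument is supplied with <*>.
sums2 :: [Int]
sums2 = (+) <$> [1,2,3] <*> [4,5,6]

sums3 :: [Int]
sums3 = (\x y z -> x + y + z) <$> [1,2] <*> [10,20] <*> [100,200]

main :: IO ()
main = do
  print sums2  -- [5,6,7,6,7,8,7,8,9]
  print sums3  -- [111,211,121,221,112,212,122,222]
```

The same shape works for any Applicative, not just lists – swap in Maybe, parsers, IO, and so on.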
Re: [Haskell-cafe] lazy evaluation is not complete
On 10 Feb 2009, at 07:57, Max Rabkin wrote: On Mon, Feb 9, 2009 at 10:50 PM, Iavor Diatchki iavor.diatc...@gmail.com wrote:
I 0 * _ = I 0
I x * I y = I (x * y)
Note that (*) is now non-commutative (w.r.t. _|_). Of course, that's what we need here. To improve slightly:
I 0 |* _ = I 0
I x |* I y = I (x * y)
_ *| I 0 = I 0
I x *| I y = I (x * y)
I x * I y = (I x |* I y) `unamb` (I x *| I y)
Now it is commutative :) Bob
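For reference, here is a self-contained sketch of the two biased multiplications (omitting unamb itself, which lives in Conal Elliott's separate unamb package): each one is strict in only one side's zero, which is exactly why racing them with unamb restores commutativity with respect to _|_:

```haskell
newtype I = I Integer deriving (Eq, Show)

-- Left-biased: a zero on the left short-circuits, so the right
-- argument is never examined.
(|*) :: I -> I -> I
I 0 |* _   = I 0
I x |* I y = I (x * y)

-- Right-biased mirror image: a zero on the right short-circuits.
(*|) :: I -> I -> I
_   *| I 0 = I 0
I x *| I y = I (x * y)

main :: IO ()
main = do
  print (I 0 |* undefined)  -- I 0: the right argument is never forced
  print (undefined *| I 0)  -- I 0: the left argument is never forced
  print (I 3 |* I 4)        -- I 12
```

Neither operator alone handles a zero arriving from its strict side against _|_ on the other; `unamb` would run both and take whichever answers first.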
Re: [Haskell-cafe] Re: Switching from Mercurial to Darcs
On 6 Feb 2009, at 10:12, Paolo Losi wrote: Henning Thielemann wrote: 4) hg commit -m message this commits my changes locally. I always do this before pulling, since then I'm sure my changes are saved in the case a merge goes wrong. In old darcs it's precisely the other way round. Since it is so slow on merging ready patches, you better merge unrecorded changes. IMO pulling/merging before commit is a good practise also for hg: it avoids a (very often useless) merge commit in the history. I don't understand this view. Isn't the point of a commit that you flag working points? In each branch, before you merge, (hopefully) you have a working repository, so flag it as such, and commit. When you merge, you may or may not have a working repository; fix it until it is, and merge. I would never do a merge without the two branches I was merging having a commit just before the merge. Bob
[Haskell-cafe] Re: [Haskell-beginners] Just how unsafe is unsafe
On 5 Feb 2009, at 22:11, Andrew Wagner wrote: So we all know the age-old rule of thumb, that unsafeXXX is simply evil and anybody that uses it should be shot (except when it's ok). I understand that unsafeXXX allows impurity, which defiles our ability to reason logically about Haskell programs like we would like to. My question is, to what extent is this true? Suppose we had a module, UnsafeRandoms, which had a function that would allow you to generate a different random number every time you call it. The semantics are relatively well-defined; impurity is safely sectioned off in its own impure module, which is clearly labeled as such. How much damage does this do? The problem here is composability – you have no idea how far your non-referentially-transparent code has spread, because you can compose functions together willy-nilly, meaning your random numbers can get pushed through all sorts of things, and cause odd behaviours (e.g. if a constant happens to get evaluated twice rather than once, and return different values each time). Can we push the lines elsewhere? Is sectioning unsafeXXX into Unsafe modules a useful idiom that we can use for other things as well? Well, not useful modules, but useful types instead. The point of IO for example is to deliberately construct an environment in which you can't get one of your unsafe values out into the referentially transparent world – the IO type denotes the line on which one side contains unsafe values, and the other side does not. There are however some instances where unsafe functions *are* safe; Conal's unamb function for example always returns the same value (as long as its precondition is met), even though it contains IO-based code to race the values. There are also some instances where unsafe functions are safe purely through force of will. For example:
type ResourcePath = FilePath
loadImageResource :: ResourcePath -> Image
loadImageResource = unsafePerformIO . loadImage
loadImage = readFile
This is safe iff you treat the resource as a part of your program, just like your program's code: if it changes, the world falls down, but as long as it's still there and still the same, you're entirely safe. Bob
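To make the "constant evaluated twice" danger concrete, here is a minimal sketch (my example, not from the thread) of the standard top-level-IORef trick: it is only safe because of the NOINLINE pragma, which stops the compiler duplicating the unsafePerformIO call and thereby creating a fresh cell at each use site:

```haskell
import System.IO.Unsafe (unsafePerformIO)
import Data.IORef

-- A top-level mutable cell created "purely". Without NOINLINE the
-- optimiser may inline the unsafePerformIO call, so different uses
-- of 'counter' could each get their own fresh IORef – precisely the
-- "evaluated twice, different values" failure described above.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

main :: IO ()
main = do
  a <- readIORef counter
  modifyIORef counter (+ 1)
  b <- readIORef counter
  print (a, b)  -- (0,1): both uses really do see the same cell
```

This is the kind of force-of-will safety the mail describes: the code is only correct under side conditions the type system cannot see.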
Re: [Haskell] Re: HOC
On 4 Feb 2009, at 13:33, Benjamin L.Russell wrote: On Sat, 31 Jan 2009 21:34:34 +0100, Thomas Davie tom.da...@gmail.com wrote: I noticed recently that HOC has moved over to google code, and seems a little more active than it was before. Is there a mailing list where I can talk to other users and get myself kick started, or is it a case of just using the standard Haskell ones? According to the HOC: Support site (see http://hoc.sourceforge.net/support.html), There are four mailing lists where you can contact the HOC developers and other users: hoc-announce Announcements of HOC releases and related tools (low-traffic) hoc-users General HOC discussions hoc-devel HOC developer implementation discussions hoc-cvs CVS commit log messages The above-mentioned links point to the following sites: hoc-announce Info Page https://lists.sourceforge.net/lists/listinfo/hoc-announce hoc-users Info Page https://lists.sourceforge.net/lists/listinfo/hoc-users hoc-devel Info Page https://lists.sourceforge.net/lists/listinfo/hoc-devel hoc-cvs Info Page https://lists.sourceforge.net/lists/listinfo/hoc-cvs You can subscribe to the above-mentioned mailing lists at the above-indicated sites. Ah, neat – I guess they have a lot of updating to do having moved away from Sourceforge. Bob___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell-cafe] Elegant powerful replacement for CSS
On 3 Feb 2009, at 20:39, Conal Elliott wrote: [Spin-off from the haskell-cafe discussion on functional/denotational GUI toolkits] I've been wondering for a while now what a well-designed alternative to CSS could be, where well-designed would mean consistent, composable, orthogonal, functional, based on an elegantly compelling semantic model (denotational). Can I start by replacing HTML please :) I'd like to separate the document in roughly the same way as HTML and CSS attempt to, meaning I'd like a document description language and a styling description language. I can imagine the styling language having the meaning function from documents onto geometry, but the document description language is harder. Ideally what I'd like to do with it is to make it describe *only* the logical structure of the information being conveyed – sections, text, figures, tables (no, not for layout, for tabular data) etc. I can't, though, come up with a nice simple solution that (a) restricts users to really describing documents, not layout, and (b) still allows for composability in any sensible kind of way. Bob
Re: [Haskell-cafe] Purely funcional LU decomposition
On 3 Feb 2009, at 22:37, Rafael Gustavo da Cunha Pereira Pinto wrote: Hello folks, After a discussion on whether it is possible to compile hmatrix on Windows, I decided to go crazy and do an LU decomposition entirely in Haskell... At first I thought it would be necessary to use a mutable or monadic version of Array, but then I figured out it is a purely iterative process. I am releasing this code fragment as LGPL. Shininess indeed – a quick note though: as ghc doesn't support dynamic linking of Haskell code, the above is effectively equivalent to the GPL. Would be lovely if you packaged this up and stuck it on Hackage :) Bob
Re: [Haskell-cafe] Re: Why binding to existing widget toolkits doesn't make any sense
On 3 Feb 2009, at 08:12, Achim Schneider wrote: John A. De Goes j...@n-brain.net wrote: Perhaps I should have been more precise: How do you define layout and interaction semantics in such a way that the former has a *necessarily* direct, enormous impact on the latter? HTML/CSS is a perfect example of how one can decouple a model of content from the presentation of that content. The developer writes the content model and the controller, while UX guys or designers get to decide how it looks. HTML, or rather XML, would be layout to me. GUIs usually don't serve static content, and allowing a CSS layer to position e.g. a filter GUI that supports chaining up any amount of filters by slicing them apart and positioning them on top of each other (maybe because someone didn't notice that you can use more than one filter) wreaks havoc on both usability and the semantics. Wreaks havoc on the semantics in the sense that if a thing is editable, the semantics should guarantee that it is, indeed, editable. Likewise, if something is marked as visible (and such things are explicit in the model, not defined by an outer layer), the semantics should guarantee that it is visible. I mostly don't get how a topic discussing how to do GUIs in a beautiful, consistent, composable, orthogonal, functional way got onto the topic of oh hay, you could do it with HTML and CSS. Sure, those two may be declarative languages, but that doesn't make either of them fill the list of features required above! Bob
Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX
This is caused by OS X's libiconv being entirely CPP macros, the FFI has nothing to get hold of. IIRC there's a ghc bug report open for it. Bob On 1 Feb 2009, at 18:57, Antoine Latter wrote: Funny story, If I do the following three things, I get errors on my Intel Mac OS 10.5: * Build an executable with Cabal * Have the executable have a build-dep of pcre-light in the .cabal * use haskeline in the executable itself I get crazy linker errors relating to haskeline and libiconv: Shell output: $ cabal clean cabal configure cabal build cleaning... Configuring test-0.0.0... Preprocessing executables for test-0.0.0... Building test-0.0.0... [1 of 1] Compiling Main ( test.hs, dist/build/test/test-tmp/Main.o ) Linking dist/build/test/test ... Undefined symbols: _iconv_open, referenced from: _s9Qa_info in libHShaskeline-0.6.0.1.a(IConv.o) _iconv_close, referenced from: _iconv_close$non_lazy_ptr in libHShaskeline-0.6.0.1.a(IConv.o) _iconv, referenced from: _sa0K_info in libHShaskeline-0.6.0.1.a(IConv.o) ld: symbol(s) not found collect2: ld returned 1 exit status But all three above conditions need to be true - if I build using 'ghc --make' everything works great, even if the executable imports pcre-light and haskeline. If I have build-deps on haskeline and pcre-light, but don't actually import haskeline, everything also works great. Here are the files I've used: test.hs: import System.Console.Haskeline main :: IO () main = print "Hello!" test.cabal Name: test version: 0.0.0 cabal-version: >= 1.2 build-type: Simple Executable test main-is: test.hs build-depends: base, haskeline, pcre-light >= 0.3 Is there some way I need to be building haskeline on OS X to make this work? 
Thanks, Antoine more details: $ cabal --version cabal-install version 0.6.0 using version 1.6.0.1 of the Cabal library $ ghc --version The Glorious Glasgow Haskell Compilation System, version 6.10.0.20081007 links: pcre-light: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pcre-light haskeline: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/haskeline ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskeline, pcre-light, iconv and Cabal on OSX
On 1 Feb 2009, at 19:43, Antoine Latter wrote: On Sun, Feb 1, 2009 at 12:04 PM, Antoine Latter aslat...@gmail.com wrote: On Sun, Feb 1, 2009 at 12:01 PM, Thomas Davie tom.da...@gmail.com wrote: This is caused by OS X's libiconv being entirely CPP macros, the FFI has nothing to get hold of. IIRC there's a ghc bug report open for it. Bob So why does it sometimes work? I can write and compile executables using haskeline, both with 'ghc --make' and 'cabal configure cabal build'. This sounds like something I can patch haskeline to account for, then? After a bit of digging, I saw this snippet in the .cabal file for the iconv package on hackage: -- We need to compile via C because on some platforms (notably darwin) -- iconv is a macro rather than real C function. doh! ghc-options: -fvia-C -Wall But it looks like the 'iconv' package is broken in the exact same way for me - I get the same crazy linker errors. Yep, darwin is OS X :) Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell] HOC
I noticed recently that HOC has moved over to google code, and seems a little more active than it was before. Is there a mailing list where I can talk to other users and get myself kick started, or is it a case of just using the standard Haskell ones? Bob ___ Haskell mailing list Haskell@haskell.org http://www.haskell.org/mailman/listinfo/haskell
Re: [Haskell-cafe] Re: Laws and partial values
On 25 Jan 2009, at 23:36, Daniel Fischer wrote: Why is this obvious - I would argue that it's obvious that bottom *is* () - the data type definition says there's only one value in the type. Any value that I haven't defined yet must be in the set, and it's a single element set, so it *must* be (). It's obvious because () is a defined value, while bottom is not - per definitionem. The matter is that besides the elements declared in the datatype definition, every Haskell type also contains bottom. - I thought that under discussion were the actual Haskell semantics - I'm not so sure about that anymore. If Thomas Davie (Bob) was here discussing an alternative theory in which () is unlifted, then sorry, I completely misunderstood. My argument is that in Haskell as it is, as far as I know, _|_ *is* defined to denote a nonterminating computation, while on the other hand () is an expression in normal form, hence denotes a terminating computation, therefore it is obvious that the two are not the same, as stated by Jake. Of course I may be wrong in my premise, then, if one really cared about obviousness, one would have to put forward a different argument. If you go look through the message history some more, you'll see a couple of emails which convinced me that that indeed was the semantics in haskell, and a follow up saying okay, lets discuss a hypothetical now, because this looks fun and interesting. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: filestore 0.1
On 26 Jan 2009, at 06:17, carmen wrote: back to the original topic of the thread.. cool project, id be interested in a pure-FS backend as well, Indeed, very cool! Can I make another feature request – generalize how diffs are created, so that I could in theory parse the file contents, and then diff the CSTs rather than diffing text. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Laws and partial values
On 25 Jan 2009, at 10:08, Daniel Fischer wrote: On Sunday, 25 January 2009 at 00:55, Conal Elliott wrote: It's obvious because () is a defined value, while bottom is not - per definitionem. I wonder if this argument is circular. I'm not aware of defined and not defined as more than informal terms. They are informal. I could've written one is a terminating computation while the other is not. Is that a problem when trying to find the least defined element of a set of terminating computations? Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Laws and partial values (was: [Haskell-cafe] mapM_ - Monoid.Monad.map)
On 24 Jan 2009, at 10:40, Ryan Ingram wrote: On Fri, Jan 23, 2009 at 10:49 PM, Thomas Davie tom.da...@gmail.com wrote: Isn't the point of bottom that it's the least defined value. Someone above made the assertion that for left identity to hold, _|_ `mappend` () must be _|_. But, as there is only one value in the Unit type, all values we have no information about must surely be that value, so this is akin to saying () `mappend` () must be (), which our definition gives us. But _|_ is not (). For example: data Nat = Z | S Nat proveFinite :: Nat -> () proveFinite Z = () proveFinite (S x) = proveFinite x infinity :: Nat infinity = S infinity somecode x = case proveFinite x of () -> something_that_might_rely_on_x_being_finite problem = somecode infinity If you can pretend that the only value of () is (), and ignore _|_, you can break invariants. This becomes even more tricky when you have a single-constructor datatype which holds data relevant to the typechecker; ignoring _|_ in this case could lead to unsound code. Your proveFinite function has the wrong type – it should be Nat -> Bool, not Nat -> () – after all, you want to be able to distinguish between proving it finite, and proving it infinite, don't you (even if in reality, you'll never return False). Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
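For what it's worth, Bob's suggested variant sketched out (my rendering, not code from the thread) — note that on `infinity` it still diverges rather than returning False, which is arguably Ryan's point about _|_:

```haskell
data Nat = Z | S Nat

-- Returns True for finite naturals; it can never actually return
-- False, and on an infinite Nat (e.g. `let inf = S inf in inf`)
-- it simply diverges — the False case is unreachable.
proveFinite :: Nat -> Bool
proveFinite Z     = True
proveFinite (S x) = proveFinite x
```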
[Haskell-cafe] Re: Laws and partial values
On 24 Jan 2009, at 20:28, Jake McArthur wrote: Thomas Davie wrote: But, as there is only one value in the Unit type, all values we have no information about must surely be that value The flaw in your logic is your assumption that the Unit type has only one value. Consider bottom :: () bottom = undefined Obviously, bottom is not () Why is this obvious – I would argue that it's obvious that bottom *is* () – the data type definition says there's only one value in the type. Any value that I haven't defined yet must be in the set, and it's a single element set, so it *must* be (). , but its type, nonetheless, is Unit. Unit actually has both () and _|_. More generally, _|_ inhabits every Haskell type, even types with no constructors (which itself requires a GHC extension, of course): Does it? Do you have a document that defines Haskell types that way? data Empty bottom' :: Empty bottom' = undefined If you only ever use total functions then you can get away with not accounting for _|_. Perhaps ironically a function that doesn't account for _|_ may be viewed philosophically as a partial function since its contract doesn't accommodate all possible values. Now that one is interesting, I would argue that this is a potential flaw in the type extension – values in the set defined here do not exist, that's what the data definition says. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Laws and partial values
On 24 Jan 2009, at 21:31, Dan Doel wrote: On Saturday 24 January 2009 3:12:30 pm Thomas Davie wrote: On 24 Jan 2009, at 20:28, Jake McArthur wrote: Thomas Davie wrote: But, as there is only one value in the Unit type, all values we have no information about must surely be that value The flaw in your logic is your assumption that the Unit type has only one value. Consider bottom :: () bottom = undefined Obviously, bottom is not () Why is this obvious – I would argue that it's obvious that bottom *is* () – the data type definition says there's only one value in the type. Any value that I haven't defined yet must be in the set, and it's a single element set, so it *must* be (). For integers, is _|_ equal to 0? 1? 2? ... Hypothetically (as it's already been pointed out that this is not the case in Haskell), _|_ in the integers would not be known, until it became more defined. I'm coming at this from the point of view that bottom would contain all the information we could possibly know about a value while still being the least value in the set. In such a scheme, bottom for Unit would be (), as we always know that the value in that type is (); bottom for pairs would be (_|_, _|_), as all pairs look like that (this incidentally would allow fmap and second to be equal on pairs); bottom for integers would contain no information, etc. , but its type, nonetheless, is Unit. Unit actually has both () and _|_. More generally, _|_ inhabits every Haskell type, even types with no constructors (which itself requires a GHC extension, of course): Does it? Do you have a document that defines Haskell types that way? From the report: Since Haskell is a non-strict language, all Haskell types include _|_. http://www.haskell.org/onlinereport/exps.html#basic-errors Although some languages probably have the semantics you're thinking of (Agda, for instance, although you can write non-terminating computations and it will merely flag it in red, currently), Haskell does not. Yep, indeed. 
Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
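One concrete way to see that Haskell's products are lifted — i.e. that (_|_, _|_) is distinct from _|_, the point under discussion above — is the following small illustration of my own, not from the thread:

```haskell
-- Matching the pair constructor only forces the argument to WHNF,
-- not its components, so this succeeds on (undefined, undefined):
isPair :: (a, b) -> Bool
isPair (_, _) = True

ok :: Bool
ok = isPair (undefined, undefined)
-- `isPair undefined` would diverge instead, so (_|_, _|_) and _|_
-- are observably different values: products in Haskell are lifted.
```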
[Haskell-cafe] Re: Laws and partial values
On 24 Jan 2009, at 22:19, Henning Thielemann wrote: On Sat, 24 Jan 2009, Thomas Davie wrote: On 24 Jan 2009, at 21:31, Dan Doel wrote: For integers, is _|_ equal to 0? 1? 2? ... Hypothetically (as it's already been pointed out that this is not the case in Haskell), _|_ in the integers would not be known, until it became more defined. I'm coming at this from the point of view that bottom would contain all the information we could possibly know about a value while still being the least value in the set. In such a scheme, bottom for Unit would be (), as we always know that the value in that type is (); bottom for pairs would be (_|_, _|_), as all pairs look like that (this incidentally would allow fmap and second to be equal on pairs); bottom for integers would contain no information, etc. Zero- and one-constructor data types would then significantly differ from two- and more-constructor data types, wouldn't they? Yes, they would, but not in any way that's defined, or written in, the fact that they have a nice property of being able to tell something about what bottom looks like is rather nice actually. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Laws and partial values
On 24 Jan 2009, at 22:47, Lennart Augustsson wrote: You can dream up any semantics you like about bottom, like it has to be () for the unit type. But it's simply not true. I suggest you do some cursory study of denotational semantics and domain theory. Ordinary programming languages include non-termination, so that has to be captured somehow in the semantics. And that's what bottom does. I'm not sure why you're saying that this semantics does not capture non-termination – the only change is that computations resulting in the unit type *can't* non terminate, because we can always optimize them down to (). Of course, if you want to be able to deal with non-termination, one could use the Maybe () type! Some chatting with Conal about the semantics I'm talking about revealed some nice properties, so I'm gonna run away and think about this, and then blog about it. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Laws and partial values
On 25 Jan 2009, at 00:01, Benja Fallenstein wrote: Hi Lennart, On Sat, Jan 24, 2009 at 10:47 PM, Lennart Augustsson lenn...@augustsson.net wrote: You can dream up any semantics you like about bottom, like it has to be () for the unit type. But it's simply not true. I suggest you do some cursory study of denotational semantics and domain theory. Umh. It's certainly not Haskell, but as far as I can tell, the semantics Bob likes are perfectly fine domain theory. (_|_, _|_) = _|_ is the *simpler* definition of domain-theoretical product (and inl(_|_) = inr(_|_) = _|_ is the simpler definition of domain-theoretical sum), and the unit of this product (and sum) is indeed the type containing only bottom. Lifting everything, as Haskell does, is extra. I suppose it's unusual that Bob wants to lift sums but not products, but there's nothing theoretically fishy about it that I can see. Yep, this is part of the insight that Conal helped me have – I sense tomorrow is gonna be spent with much blog writing. Bob___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] mapM_ - Monoid.Monad.map
On 23 Jan 2009, at 21:50, Henning Thielemann wrote: I always considered the monad functions with names ending on '_' a concession to the IO monad. Would you need them for any other monad than IO? For self-written monads you would certainly use a monoid instead of monadic action, all returning (), but IO is a monad. (You could however wrap (newtype Output = Output (IO ())) and define a Monoid instance on Output.) However our recent Monoid discussion made me think about mapM_, sequence_, and friends. I think they could be useful for many monads if they would have the type: mapM_ :: (Monoid b) => (a -> m b) -> [a] -> m b I expect that the Monoid instance of () would yield the same efficiency as todays mapM_ and it is also safer since it connects the monadic result types of the atomic and the sequenced actions. There was a recent discussion on the topic: http://neilmitchell.blogspot.com/2008/12/mapm-mapm-and-monadic-statements.html Of note btw, these don't need Monad at all... sequence :: Applicative f => [f a] -> f [a] sequence = foldr (liftA2 (:)) (pure []) mapA :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b) mapA = (fmap . fmap) sequence fmap Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
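To see Bob's Applicative formulation of sequence in action, here is the same definition restated self-contained (renamed sequenceA' to avoid clashing with the Prelude; `liftA2` is from Control.Applicative):

```haskell
import Control.Applicative (liftA2)

-- sequence needs only Applicative, not Monad: cons each effectful
-- element onto an effectful list, starting from pure [].
sequenceA' :: Applicative f => [f a] -> f [a]
sequenceA' = foldr (liftA2 (:)) (pure [])
```

For instance, with the Maybe applicative, `sequenceA' [Just 1, Just 2]` gives `Just [1,2]`, while any `Nothing` in the list makes the whole result `Nothing`.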
Re: Laws and partial values (was: [Haskell-cafe] mapM_ - Monoid.Monad.map)
On 24 Jan 2009, at 02:33, Luke Palmer wrote: On Fri, Jan 23, 2009 at 6:10 PM, rocon...@theorem.ca wrote: On Fri, 23 Jan 2009, Derek Elkins wrote: mempty `mappend` undefined = undefined (left identity monoid law) The above definition doesn't meet this, similarly for the right identity monoid law. That only leaves one definition, () `mappend` () = () which does indeed satisfy the monoid laws. So the answer to the question is Yes. Another example of making things as lazy as possible going astray. I'd like to argue that laws, such as monoid laws, do not apply to partial values. But I haven't thought my position through yet. Please try to change your mind. I'd actually argue that this is just the wrong way of formulating my statement. Please correct my possibly ill informed maths, if Im doin it rong though... Isn't the point of bottom that it's the least defined value. Someone above made the assertion that for left identity to hold, _|_ `mappend` () must be _|_. But, as there is only one value in the Unit type, all values we have no information about must surely be that value, so this is akin to saying () `mappend` () must be (), which our definition gives us. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
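The two candidate `mappend`s for () being argued over can be written out as plain functions (hypothetical standalone definitions of mine, not the Data.Monoid instance):

```haskell
-- The fully lazy version Bob is defending: ignores both arguments.
unitLazy :: () -> () -> ()
unitLazy _ _ = ()

-- The strict version: forces both arguments before producing ().
unitStrict :: () -> () -> ()
unitStrict () () = ()

-- With the lazy version, `unitLazy undefined ()` evaluates to (),
-- so _|_ `mappend` () gives () rather than _|_ — exactly the left
-- identity question above. `unitStrict undefined ()` would diverge.
```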
Re: [Haskell-cafe] Re: How to make code least strict?
Further to all the playing with unamb to get some very cool behaviors, you might want to look at Olaf Chitil's paper here: http://www.cs.kent.ac.uk/pubs/2006/2477/index.html It outlines a tool for checking if your programs are as non-strict as they can be. Bob On 21 Jan 2009, at 02:08, Conal Elliott wrote: Lovely reformulation, Ryan! I think lub [4] is sufficient typeclass hackery for unambPatterns: unambPatterns == lubs == foldr lub undefined [4] http://conal.net/blog/posts/merging-partial-values I think performance is okay now, if you have very recent versions of unamb *and* GHC head (containing some concurrency bug fixes). See http://haskell.org/haskellwiki/Unamb . The GHC fix will take a while to get into common use. My definitions of zip via (a) 'assuming' 'unamb' and (b) parAnnihilator are badly broken. For one, the unamb arguments are incompatible (since i didn't check for both non-null args in the third case). Also, the types aren't right for parAnnihilator. I tried out this idea, and it seems to work out very nicely. See the brand-new blog post http://conal.net/blog/posts/lazier-function-definitions-by-merging-partial-values/ . Blog comments, please! - Conal On Mon, Jan 19, 2009 at 3:01 PM, Ryan Ingram ryani.s...@gmail.com wrote: Actually, I see a nice pattern here for unamb + pattern matching: zip xs ys = foldr unamb undefined [p1 xs ys, p2 xs ys, p3 xs ys] where p1 [] _ = [] p2 _ [] = [] p3 (x:xs) (y:ys) = (x,y) : zip xs ys Basically, split each pattern out into a separate function (which by definition is _|_ if there is no match), then use unamb to combine them. The invariant you need to maintain is that potentially overlapping pattern matches (p1 and p2, here) must return the same result. With a little typeclass hackery you could turn this into zip = unambPatterns [p1,p2,p3] where {- p1, p2, p3 as above -} Sadly, I believe the performance of parallel-or-style operations is pretty hideous right now. Conal? 
-- ryan On Mon, Jan 19, 2009 at 2:42 PM, Conal Elliott co...@conal.net wrote: I second Ryan's recommendation of using unamb [1,2,3] to give you unbiased (symmetric) laziness. The zip definition could also be written as zip xs@(x:xs') ys@(y:ys') = assuming (xs == []) [] `unamb` assuming (ys == []) [] `unamb` (x,y) : zip xs' ys' The 'assuming' function yields a value if a condition is true and otherwise is bottom: assuming :: Bool -> a -> a assuming True a = a assuming False _ = undefined This zip definition is a special case of the annihilator pattern, so zip = parAnnihilator (\ (x:xs') (y:ys') -> (x,y) : zip xs' ys') [] where 'parAnnihilator' is defined in Data.Unamb (along with other goodies) as follows: parAnnihilator :: Eq a => (a -> a -> a) -> a -> (a -> a -> a) parAnnihilator op ann x y = assuming (x == ann) ann `unamb` assuming (y == ann) ann `unamb` (x `op` y) [1] http://haskell.org/haskellwiki/Unamb [2] http://hackage.haskell.org/packages/archive/unamb/latest/doc/html/Data-Unamb.html [3] http://conal.net/blog/tag/unamb/ - conal On Mon, Jan 19, 2009 at 12:27 PM, Ryan Ingram ryani.s...@gmail.com wrote: On Mon, Jan 19, 2009 at 9:10 AM, ChrisK hask...@list.mightyreason.com wrote: Consider that the order of pattern matching can matter as well, the simplest common case being zip: zip xs [] = [] zip [] ys = [] zip (x:xs) (y:ys) = (x,y) : zip xs ys If you are obsessive about least-strictness and performance isn't a giant concern, this seems like a perfect use for Conal's unamb[1] operator. zipR xs [] = [] zipR [] ys = [] zipR (x:xs) (y:ys) = (x,y) : zip xs ys zipL [] ys = [] zipL xs [] = [] zipL (x:xs) (y:ys) = (x,y) : zip xs ys zip xs ys = unamb (zipL xs ys) (zipR xs ys) This runs both zipL and zipR in parallel until one of them gives a result; if neither of them is _|_ they are guaranteed to be identical, so we can unambiguously choose whichever one gives a result first. 
-- ryan [1] http://conal.net/blog/posts/functional-concurrency-with-unambiguous-choice/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
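The unamb package does the real work in the thread above, but the core racing trick can be sketched with nothing but base — a toy of my own (the real unamb presents a pure interface and is far more careful about exceptions, blocking, and thread cleanup):

```haskell
import Control.Concurrent (forkIO, killThread, newEmptyMVar, putMVar, takeMVar)
import Control.Exception (SomeException, catch, evaluate)

-- Race two values; return whichever reaches WHNF first.
-- A thread whose value is _|_ (an error, or a loop) never answers,
-- so the defined side wins — the essence of unambiguous choice.
race2 :: a -> a -> IO a
race2 x y = do
  v  <- newEmptyMVar
  t1 <- forkIO (attempt x v)
  t2 <- forkIO (attempt y v)
  r  <- takeMVar v
  mapM_ killThread [t1, t2]
  return r
  where
    attempt a var = (evaluate a >>= putMVar var) `catch` ignore
    ignore :: SomeException -> IO ()
    ignore _ = return ()
```

So `race2 (error "bottom") 42` yields 42, mirroring how `unamb` picks the defined branch of the two zips.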
Re: [Haskell-cafe] How to simplify this code?
add2 :: JSON a => MyData -> String -> a -> MyData add2 m k v = fromJust $ (\js -> m { json = js }) `liftM` (showJSON `liftM` (toJSObject `liftM` (((k, showJSON v):) `liftM` (fromJSObject `liftM` (jsObj $ json m))))) setJSON m js = m {json = js} add2 m k v = fromJust . fmap (setJSON m) . fmap showJSON . fmap toJSObject . fmap ((k, showJSON v):) . fmap fromJSObject . jsObj . json $ m now let's push all the fmaps together: add2 m k v = fromJust . fmap (setJSON m . showJSON . toJSObject . ((k, showJSON v):) . fromJSObject) . jsObj . json $ m much better :) Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: How to simplify this code?
On 16 Jan 2009, at 02:30, eyal.lo...@gmail.com wrote: Very nice series of refactorings! I'd like to add that it might be a better argument order to replace: JSON a => MyData -> String -> a -> MyData with: JSON a => String -> a -> MyData -> MyData Just so you can get a (MyData -> MyData) transformer, which is often useful. Following up on this idea: add m k v = fromJust . fmap (setJSON m . showJSON . toJSObject . ((k, showJSON v):) . fromJSObject) . jsObj . json $ m can now become: add k v m = fromJust . fmap (setJSON m . showJSON . toJSObject . ((k, showJSON v):) . fromJSObject) . jsObj . json $ m if you switch the type around like that, and then it truly does become obvious that add k v is a (MyData -> MyData) transformer. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
On 15 Jan 2009, at 16:34, John Goerzen wrote: Hi folks, Don Stewart noticed this blog post on Haskell by Brian Hurt, an OCaml hacker: http://enfranchisedmind.com/blog/2009/01/15/random-thoughts-on-haskell/ It's a great post, and I encourage people to read it. I'd like to highlight one particular paragraph: [snip] Sorry, I'm not going to refer to that paragraph, instead, I'm going to point out how depressing it is, that the message we're getting across to new haskellers is that Monads, and variations on monads and extensions to monads and operations on monads are the primary way Haskell combines code. We have loads of beautiful ways of combining code (not least ofc, simple application), why is it that Monad is getting singled out as the one that we must use for everything? My personal suspicion on this one is that Monad is the one that makes concessions to imperative programmers, by one of its main combinators (>>=) having the type (>>=) :: (Monad m) => m a -> (a -> m b) -> m b, and not the much nicer type (=<<) :: (Monad m) => (a -> m b) -> (m a -> m b). Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
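To illustrate the distinction Bob is pointing at: (=<<) is just (>>=) with its arguments flipped, but its type reads like lifted function application, so monadic pipelines compose right-to-left like ordinary ones (a trivial example of mine):

```haskell
-- A partial "halving" function in the Maybe monad (hypothetical name).
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

-- (=<<) :: Monad m => (a -> m b) -> m a -> m b
-- reads like composed application: half applied through the monad twice.
result :: Maybe Int
result = half =<< half =<< Just 84   -- Just 21
```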
Re: [Haskell-cafe] How to check object's identity?
On 4 Jan 2009, at 18:08, Aaron Tomb wrote: On Jan 3, 2009, at 7:28 AM, Xie Hanjian wrote: Hi, I tried this in ghci: Prelude 1:2:[] == 1:2:[] True Does this mean (:) return the same object on same input, or (==) is not for identity checking? If the later is true, how can I check two object is the *same* object? As others have explained, the == operator doesn't tell you whether two values are actually stored at the same location in memory. If you really need to do this, however, GHC does provide a primitive for comparing the addresses of two arbitrary values: reallyUnsafePtrEquality# :: a -> a -> Int# http://haskell.org/ghc/docs/latest/html/libraries/ghc-prim/GHC-Prim.html#22 Take note of the reallyUnsafe prefix, though. :-) It's not something most programs should ever need to deal with. Of note, you probably don't need to do this. It's usually safer to associate data with a key, using Data.Map, or just pairing objects with a unique id. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
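A sketch of the safer approach Bob mentions — tag values with a unique key and keep them in a Data.Map, so "same object?" becomes a key comparison (illustrative names of my own):

```haskell
import qualified Data.Map as Map

-- Give each object a distinct Integer key at creation time; identity
-- questions then reduce to comparing keys, with no pointer tricks.
type ObjId = Integer

register :: [a] -> Map.Map ObjId a
register = Map.fromList . zip [0..]

sameObject :: ObjId -> ObjId -> Bool
sameObject = (==)
```

For example, two equal strings stored under different keys are equal values but not the "same object" by key.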
[Haskell-cafe] Re: [Haskell-beginners] pattern matching on date type
On 1 Jan 2009, at 09:36, Max.cs wrote: thanks! suppose we have data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Show and how I could define a function foo :: a -> Tree a that foo a = Leaf a where a is not a type of Tree foo b = b where b is one of the type of Tree (Leaf or Branch) ? The following code seems not working.. foo (Leaf a) = a foo a = Leaf a saying 'Couldn't match expected type `a' against inferred type `Btree a' Hi again Max, I'm assuming this is continuing from the concatT example, and that you're struggling with the first function you must pass to foldTree. Remember the type of the function – it's not a -> Tree a, but Tree a -> Tree a, because your leaves in the parent tree all contain trees to glue on at that point. So, the function you want, is the function which looks at the parameter it's given, goes oh, that's interesting, does nothing to it, and hands it back to replace the Leaf. I recommend searching hoogle for functions of type a -> a, the function you're looking for is built in. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: [Haskell-beginners] about the concatenation on a tree
On 31 Dec 2008, at 16:02, Max cs wrote: hi all, not sure if there is someone still working during holiday like me : ) I got a little problem in implementing some operations on tree. suppose we have a tree data type defined: data Tree a = Leaf a | Branch (Tree a) (Tree a) I want to do a concatenation on these trees just like the concat on lists. Anyone has an idea on it? or is there some existing implementation? Thank you and Happy New Year! How would you like to concatenate them? Concatenation on lists is easy because there's only one end point to attach the next list to, on a tree though, there are many leaves to attach things to. Here's a few examples though: Attaching to the right most point on the tree (tree data structure modified to store data in branches not leaves here) data Tree a = Leaf | Branch (Tree a) a (Tree a) concatT :: [Tree a] -> Tree a concatT = foldr1 appendT appendT :: Tree a -> Tree a -> Tree a appendT Leaf t = t appendT (Branch l x r) t = Branch l x (appendT r t) Attaching to *all* the leaves on the tree (same modification to the data structure) concatT :: [Tree a] -> Tree a concatT = foldr1 appendT appendT :: Tree a -> Tree a -> Tree a appendT Leaf t = t appendT (Branch l x r) t = Branch (appendT l t) x (appendT r t) merging a list of trees maintaining them as ordered binary trees concatT :: Ord a => [Tree a] -> Tree a concatT = foldr1 unionT unionT :: Ord a => Tree a -> Tree a -> Tree a unionT t = foldrT insertT t foldrT :: (a -> b -> b) -> b -> Tree a -> b foldrT f z Leaf = z foldrT f z (Branch l x r) = f x (foldrT f (foldrT f z r) l) insertT :: Ord a => a -> Tree a -> Tree a insertT x Leaf = Branch Leaf x Leaf insertT x (Branch l y r) | x <= y = Branch (insertT x l) y r | otherwise = Branch l y (insertT x r) Hope that helps. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] about the concatenation on a tree
On 31 Dec 2008, at 21:18, Henk-Jan van Tuyl wrote: On Wed, 31 Dec 2008 17:19:09 +0100, Max cs max.cs. 2...@googlemail.com wrote: Hi Henk-Jan van Tuyl, Thank you very much for your reply! I think the concatenation should be different to the treeConcat :: Tree a -> Tree a -> Tree a the above is a combination of two trees instead of a concatenation, so I think the type of treeConcat should be: treeConcat :: Tree (Tree a) -> Tree a instead. How do you think? : ) I tried to implement it .. but it seems confusing.. to me Thanks Max Hello Max, The function treeConcat :: Tree (Tree a) -> Tree a cannot be created, as it has an infinite type; It does? How did he type it then? And yes, it can be created concatT :: Tree (Tree a) -> Tree a concatT (Leaf t) = t concatT (Branch l r) = Branch (concatT l) (concatT r) It's also known as join on trees (as I explained a bit more in my response on haskell-beginners). Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
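Since Bob notes that concatT is join for the tree monad, here it is self-contained, with the Monad instance it induces (the instances are my sketch, not code from the thread):

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Eq, Show)

-- Bob's concatT: monadic join, flattening a tree of trees by
-- grafting each inner tree in place of the leaf holding it.
joinT :: Tree (Tree a) -> Tree a
joinT (Leaf t)     = t
joinT (Branch l r) = Branch (joinT l) (joinT r)

instance Functor Tree where
  fmap f (Leaf x)     = Leaf (f x)
  fmap f (Branch l r) = Branch (fmap f l) (fmap f r)

instance Applicative Tree where
  pure = Leaf
  fs <*> xs = joinT (fmap (\f -> fmap f xs) fs)

instance Monad Tree where
  t >>= f = joinT (fmap f t)
```

For example, `joinT (Branch (Leaf (Leaf 1)) (Leaf (Branch (Leaf 2) (Leaf 3))))` grafts the inner trees, giving `Branch (Leaf 1) (Branch (Leaf 2) (Leaf 3))`.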
Re: [Haskell-cafe] instance Enum [Char] where ...
On 30 Dec 2008, at 04:25, JustinGoguen wrote: I am having difficulty making [Char] an instance of Enum. fromEnum is easy enough: map fromEnum to each char in the string and take the sum. However, toEnum has no way of knowing what the original string was. The problem you're having is that your implementation does not correctly enumerate lists of characters – in order to do so correctly, you must not create the clashes you get with (for example "ab" and "ba"). I'd suggest rethinking how you would enumerate such strings. Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
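One clash-free scheme along the lines Bob suggests (a sketch of mine, not from the thread): treat strings over a fixed alphabet as bijective base-26 numerals, so every Int corresponds to exactly one string and vice versa — summing character codes is what causes "ab" and "ba" to collide:

```haskell
-- Bijective base-26 over 'a'..'z': "" <-> 0, "a" <-> 1, ..., "z" <-> 26,
-- "aa" <-> 27, "ab" <-> 28, ... No two strings share an index.
toStr :: Int -> String
toStr 0 = ""
toStr n = let (q, r) = (n - 1) `divMod` 26
          in toStr q ++ [toEnum (fromEnum 'a' + r)]

fromStr :: String -> Int
fromStr = foldl (\acc c -> acc * 26 + (fromEnum c - fromEnum 'a' + 1)) 0
```

With this, `fromStr "ab"` is 28 while `fromStr "ba"` is 53, so the two functions are mutually inverse — exactly what an Enum instance needs.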
Re: [Haskell-cafe] Re: Time for a new logo?
On 24 Dec 2008, at 08:13, Colin Paul Adams wrote: There are a lot of nice designs on the new_logo_ideas page. My favourite by far is Conal's. One thing I noticed - everyone seems to include lower-case lambda in the design, but no-one seems to have replaced the terminal double ell in Haskell with a double lambda. I already did that one -- it's up there. [inline image: Haskell.png] Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Yampa vs. Reactive
On 21 Dec 2008, at 13:10, Henrik Nilsson wrote: Hi Tom, In reactive, one doesn't. All behaviors and events have the same absolute 0 value for time. Right. I believe the possibility of starting behaviors later is quite important. And from what Conal wrote in a related mail, I take it that this is recognized, and that this capability is something that is being considered for reactive? Yep, it is indeed. Thanks for this series of emails by the way. It's helped clarify in my head exactly what problems Yampa solved, and exactly which of them Reactive does or doesn't solve. Thanks Tom Davie ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Defining a containing function on polymorphic list
On 22 Dec 2008, at 15:18, Andrew Wagner wrote: Yes, of course, sorry for the typo. On Mon, Dec 22, 2008 at 9:17 AM, Denis Bueno dbu...@gmail.com wrote: 2008/12/22 Andrew Wagner wagner.and...@gmail.com: The problem here is even slightly deeper than you might realize. For example, what if you have a list of functions. How do you compare two functions to each other to see if they're equal? There is no good way really to do it! So, not only is == not completely polymorphic, but it CAN'T be. There is a nice solution for this, however, and it's very simple: contain :: Eq a -> [a] -> Bool Please note that the syntax here should be: contain :: Eq a => a -> [a] -> Bool Denis Of note, unless this is an exercise, such a function already exists -- it's called elem. How do you find such a function? You search on haskell.org/hoogle. http://haskell.org/hoogle/?hoogle=Eq+a+%3D%3E+a+-%3E+%5Ba%5D+-%3E+Bool Bob
Re: [Haskell] ANN: Hoogle with more libraries
Hi Neil, This is a great addition! There are several packages up there that I want to search. A couple of small bug reports though: 1. Searching using a package name that isn't all lower case results in nothing (e.g. (a -> b) -> f a -> f b +InfixApplicative gives no results, while (a -> b) -> f a -> f b +infixapplicative gives 2). 2. Searching with the +package syntax seems to remove all other possible results. Perhaps it would be nice to show results from the default library set below. And one feature request: If you get the package name wrong (i.e. specify a package that hoogle can't see), it would be nice for it to report something like google does -- maybe you meant xyz. Bob On 21 Dec 2008, at 11:55, Neil Mitchell wrote: Hi, I am pleased to announce that the Hoogle on http://haskell.org/hoogle will now search lots of the libraries present on hackage. For example, to search for the parse function in tagsoup, try: parse +tagsoup (http://haskell.org/hoogle/?hoogle=parse+%2Btagsoup) By default Hoogle will still search the default libraries it always has (cabal, base etc.) but now, by doing +packagename, you can search for any of the libraries listed at the bottom of this message. The work of generating the necessary search databases was done by Gwern Branwen, for which I am very grateful. This is early work, and there are certain to be bugs. For example, when searching a package String /= [Char], documentation might not always be present, constructors containing functions might be a bit iffy, not all packages are present, no executables are present, if you mistype the +packagename it won't say anything. Hopefully these will all be addressed over time. QUESTIONS At the moment Hoogle searches will default to the small set of packages shipped with GHC. What should Hoogle default to? What compound sets of packages should be defined? i.e. there will probably be hlp which corresponds to all the Haskell Library Platform packages. 
This is a community tool, and these sorts of decisions are ones for the whole community. Otherwise by default you are likely to end up searching uniplate, tagsoup, safe, filepath, proposition, homeomorphic and binarydefer :-) HELP ME I can't currently recompile Hoogle, which is a bit of an issue! The haskell.org server gives the error message timer_create: Operation not supported on any relatively recent binaries created by GHC. I need to be able to compile the hoogle executable and upload it so it works. I used to do this using a copy of GHC 6.6.1 installed at York, but that was recently removed (6.6.0 has a bug which crashes hoogle, 6.8+ makes use of timer_create). How easy is it to make the haskell.org server support timer_create? How easy is it to generate binaries that don't depend on it? Anyone any good advice? Please direct any follow-ups to haskell-cafe@ Thanks Neil The libraries currently supported by hoogle are: adaptive aern-net alsa-midi ansi-terminal ansi-wl-pprint anydbm applescript arff array arrows astar attoparsec autoproc avltree base64-string benchpress bencode berkeleydbxml binary-search binary bio bitset bitsyntax bktrees bloomfilter botpp brainfuck bytestring-csv bytestring-mmap bytestring bytestringparser bytestringreadp c-io cabal cabalrpmdeps carray cc-delcont cgi-undecidable cgi change-monger chasingbottoms checked christmastree chunks classify clevercss cmath codec-compression-lzf codec-libevent colock colour compact-map condorcet containers control-monad-free control-monad-omega control-timeout cordering coreerlang cpphs csv ctemplate curl data-accessor-template data-accessor data-default data-ivar data-memocombinators data-quotientref dataenc debian-binary debugtracehelpers decimal delicious delimited-text diff dlist dotgen download-curl download drhylo dsp edisonapi edisoncore edit-distance editline eeconfig encode event-handlers event-list event-monad explicit-exception extensible-exceptions external-sort fec feed ffeed fgl fieldtrip 
filepath finance-quote-yahoo fingertree finitemap fixpoint flickr flock formlets freesound ftgl fungen funsat garsia-wachs geoip ghc-paths glfw-ogl glfw glob glut gmap gnuplot googlechart graphics-drawingcombinators graphicsformats graphscc graphviz gravatar grotetrap hacanon-light hake happs-ixset happs-util harm harp haskeline haskeline.txt haskell-lexer haskell-src-exts haskell-src haskell98 haxml haxml.txt hcl hcodecs heap hedi hexdump hexpat hfov hgalib hgdbmi hgeometric highlighting-kate hinotify hinstaller hjavascript hjs hjscript hledger hlongurl hmarkup hmeap hmidi hmm homeomorphic hopenssl hosc hosc.txt hpc hpdf hsasa hsc3-dot hsc3-unsafe hsc3 hsc3.txt hscolour hsconfigure hsemail hsh hshhelpers hslogger hslua hsoundfile hsparrot hspr-sh hstats hstringtemplate hstringtemplatehelpers hsx-xhtml html http-shed http http.txt httpd-shed hunit hxt-filter hxt i18n icalendar iff ifs imlib indentparser infinite-search infix infixapplicative interlude io-reactive ipprint irc join json lazyarray
Re: [Haskell-cafe] Re: [reactive] problem with unamb -- doesn't kill enough threads
On 20 Dec 2008, at 12:00, Peter Verswyvelen wrote: I see. Of course, how silly of me, killThread is asynchronous, it only waits until the exception is raised in the receiving thread, but it does not wait until the thread is really killed. The documentation does not seem to mention this explicitly. Now, what would be the clean way to make sure all threads are indeed killed before the process quits? I tried to add another MVar that gets set after the thread handles uncaught exceptions (so something like bracket (forkIO a) (putMVar quit ()) return) and the code that calls killThread then does takeMVar quit, but this did not solve the problem. I'm not sure I understand what the problem is – if the process has died, why do we want to kill threads? Bob
Re: [Haskell-cafe] Yampa vs. Reactive
Hi Henrik, On 19 Dec 2008, at 02:05, Henrik Nilsson wrote: Hi Tom, I'm not sure why mapping the function is not satisfactory -- It would create a new Behavior, whose internals contain only the two elements from the list -- that would expose to the garbage collector that the second element has no reference from this behavior any more, and thus the whole behavior could be collected. We must be talking at cross purposes here: there is no way that deleting the *output* from one of the behaviours from a list of outputs would cause the underlying behavior whose output no longer is observed to be garbage collected. After all, that list of three numbers is just a normal value: why should removing one of its elements, so to speak, affect the producer of the list? But if we have a list of running behaviors or signals, and that list is changed, then yes, of course we get the desired behavior (this is what Yampa does). So maybe that's what you mean? I'm afraid not, rereading what I said, I really didn't explain what I was talking about well. A Behavior in reactive is not just a continuous function of time. It is a series of steps, each of which carries a function of time. One such behavior might look like this: (+5) - 5 , (+6) - 10 , integral That is to say, this behavior starts off being a function that adds 5 to the current time. At 5 seconds, it steps, and the value changes to a function that adds 6 to time. At this point, the function that adds 5 to time can be garbage collected, along with the step. At 10 seconds, it becomes the integral of time, and the (+6) function, along with the next step, is GCed. To come back to your example, I'd expect the behavior to look like this (using named functions only so that I can refer to them): i1 t = integral t i2 t = integral (2 * t) i3 t = integral (3 * t) f t = [i1 t, i2 t, i3 t] g t = [i1 t, i3 t] f - 2, g After 2 seconds, both f, and the first step may be garbage collected. 
As g does not have any reference to i2 t, it too can be garbage collected. I hope that answers you more clearly. That's a yes. My first answer to how to implement the resetting counter would be something along the lines of this, but I'm not certain it's dead right: e = (1+) <$ mouseClick e' = (const 0) <$ some event b = accumB 0 (e `mappend` e') i.e. b is the behavior got by adding 1 every time the mouse click event occurs, but resetting to 0 whenever some event occurs. Hmm. Looks somewhat complicated to me. Anyway, it doesn't really answer the fundamental question: how does one start a behavior/signal function at a particular point in time? In reactive, one doesn't. All behaviors and events have the same absolute 0 value for time. One can however simulate such a behavior, by using a `switcher`, or accumB. In practice, having potentially large numbers of behaviors running but not changing until a certain event is hit is not a major problem. This is not a problem because reactive knows that the current step contains a constant value, not a real function of time; because of this, no changes are pushed, and no work is done, until the event hits. I believe Conal is however working on semantics for relative time based behaviors/events. Thanks Tom Davie
Re: [Haskell-cafe] Yampa vs. Reactive
Hi Henrik, On 18 Dec 2008, at 14:26, Henrik Nilsson wrote: Hi Tom, I'll have an attempt at addressing the questions, although I freely admit that I'm not as into Reactive as Conal is yet, so he may come and correct me in a minute. [...] Reactive has explicitly parameterized inputs. In your robot example I would expect something along the lines of data RobotInputs = RI {lightSensor :: Behavior Colour; bumpSwitch :: Event ()} -- A couple of example robot sensors robotBehavior :: RobotInputs -> Behavior Robot robotBehavior sensors = a behavior that combines the light sensor and the bump switch to stay in the light, but not keep driving into things. This looks exactly like Classical FRP. And if it is like Classical FRP behind the scenes, it nicely exemplifies the problem. In Classical FRP, Behavior is actually what I would call a signal function. When started (switched into), they map the system input signal from that point in time to a signal of some particular type. So, the record RobotInputs is just a record of lifted projection functions that selects some particular parts of the overall system input. Behind the scenes, all Behaviors are connected to the one and only system input. I don't think this is really true. Behaviors and Events do not reveal in their type definitions any relation to any system that they may or may not exist in. A Behavior can exist whether or not it is being run by a particular legacy adapter (a piece of code to adapt it to work as expected on a legacy, imperative computer). I can define an Event e = (+1) <$> atTimes [0,10..] and use it as a Haskell construct without needing any system at all to run it within. Similarly I can define a Behavior b = accumB 0 e that depends on this event, completely independent of any system, or definition of what basic events and behaviors I get to interact with it. 
data UIInputs = UI {mousePoint :: Behavior Point; mouseClick :: Event (); ...} world :: UIInputs -> Behavior World world = interpret mouse and produce a world with barriers, robots and lights in it Fine, of course, assuming that all behaviours share the same kind of system input, in this case UI input. But what if I want my reactive library to interface to different kinds of systems? The robot code should clearly work regardless of whether we are running it on a real hardware platform, or in a simulated setting where the system input comes from the GUI. In Classical FRP, this was not easily possible, because all combinators at some level need to refer to some particular system input type which is hardwired into the definitions. There are no hardwired definitions of what inputs I'm allowed to use or not use. If I would like my reactive program to run on a legacy robot which uses imperative IO, then I may write a legacy adapter around it to take those IO actions and translate them into Events and Behaviors that I can use. One such legacy adapter exists, called reactive-glut, which ties glut's IO actions into reactive events one can use. I could easily imagine several others, for example one that interacts with robot hardware and presents the record above to the behaviors it's adapting, or another still which works much like the interact function, but instead of taking a String -> String, takes an Event Char -> Event Char. Had Haskell had ML-style parameterized modules, that would likely have offered a solution: the libraries could have been parameterized on the system input, and then one could obtain say robot code for running on real hardware or in a simulated setting by simply applying the robot module to the right kind of system input. An alternative is to parameterize the behaviour type explicitly on the system input type: Behavior sysinput a This design eventually evolved into Arrowized FRP and Yampa. 
So, from your examples, it is not clear to what extent Reactive has addressed this point. Just writing functions that map behaviours to behaviours does not say very much. On a more philosophical note, I always found it a bit odd that if I wanted to write a function that mapped a signal of, say, type a, which we can think of as type Signal a = Time -> a to another signal, of type b say, in Classical FRP, I'd have to write a function of type Behavior a -> Behavior b which really is a function of type (Signal SystemInput -> Signal a) -> (Signal SystemInput -> Signal b) I find this unsatisfying, as my mapping from a signal of type a to a signal of type b is completely independent from the system input (or the function wouldn't have a polymorphic type). Yes, certainly that would be unsatisfactory. But I don't agree about the type of the function -- this really is a (Time -> a) -> (Time -> b). It may be though that the argument (Time -> a) is a system input from our legacy adapter, or an internal part of our program. * A clear separation between signals,
Re: [Haskell-cafe] Yampa vs. Reactive
Hi Henrik, On 18 Dec 2008, at 19:06, Henrik Nilsson wrote: Hi Tom, I don't think this is really true. Behaviors and Events do not reveal in their type definitions any relation to any system that they may or may not exist in. OK. So how does e.g. mousePoint :: Behavior Point get at the mouse input? unsafePerformIO? I.e. it is conceptually a global signal? main = adapter doSomeStuff -- Note here that different adapters provide different UIs. adapter :: (Behavior UI -> Behavior SomethingFixedThatYouKnowHowToInterpret) -> IO () adapter f = set up the system behaviors, pass them into f, grab the outputs, and do something to render. doSomeStuff :: Behavior UI -> Behavior SomethingFixedThatYouKnowHowToInterpret I'm not sure I understand you clearly. If I wish to apply a constant function to a signal, can I not just use fmap? The question is why I would want to (conceptually). I'm just saying I find it good and useful to be able to easily mix static values and computations and signals and computations on signals. Yep, I can see that; I think we need to agree to disagree on this front, I would prefer to use fmap, or <$>, while you prefer arrow syntax. You would certainly need to ask Conal on this point, but I have no reason to suspect that b' = [1,2,3,4,5] `stepper` listE [(1,[])] would not deallocate the first list once it had taken its step. It's not the lists that concern me, nor getting rid of a collection of behaviors all at once. The problem is if we want to run a collection of behaviors in parallel, all potentially accumulating internal state: how do we add/delete individual behaviors to/from that collection, without disturbing the others? For the sake of argument, say we have the following list of behaviours: [integral time, integral (2 * time), integral (3 * time)] We turn them into a single behavior with a list output in order to run them. 
After one second the output is thus [1,2,3] Now, we want to delete the second behavior, but continue to run the other two, so that the output at time 2 is [2,6] Simply mapping postprocessing that just drops the second element from the output isn't a satisfactory solution. I'm not sure why mapping the function is not satisfactory -- It would create a new Behavior, whose internals contain only the two elements from the list -- that would expose to the garbage collector that the second element has no reference from this behavior any more, and thus the whole behavior could be collected. Yes, we really do get a shared n -- without doing that we certainly would see a large space/time leak. Interesting, although I don't see why not sharing would imply a space/time leak: if the behavior is simply restarted, there is no catchup computation to do, nor any old input to hang onto, so there is neither a time nor a space-leak? Anyway, let's explore this example a bit further. Suppose lbp is the signal of left button presses, and that we can count them by count lbp Then the question is if let n :: Behavior Int n = count lbp in n `until` some event -=> n means the same as (count lbp) `until` some event -=> (count lbp) If no, then Reactive is not referentially transparent, as we manifestly cannot reason equationally. If yes, the question is how to express a counting that starts over after the switch (which sometimes is what is needed). That's a yes. My first answer to how to implement the resetting counter would be something along the lines of this, but I'm not certain it's dead right: e = (1+) <$ mouseClick e' = (const 0) <$ some event b = accumB 0 (e `mappend` e') i.e. b is the behavior got by adding 1 every time the mouse click event occurs, but resetting to 0 whenever some event occurs. Yep, such Behaviors are separated in Reactive only by the method you create them with. 
I may use the `stepper` function to create a behavior that increases in steps based on an event occurring, or I may use fmap over time to create a continuously varying Behavior. But the question was not about events vs continuous signals. The question is, what is a behavior conceptually, and when is it started? E.g. in the example above, at what point do the various instances of count lbp start counting? Or are the various instances of count lbp actually only one? They are indeed, only 1. Or if you prefer, are behaviours really signals, that conceptually start running all at once at a common time 0 when the system starts? The answers regarding input behaviors like mousePosition, that n is shared, and the need to do catchup computations all seem to indicate this. But if so, that leaves open an important question on expressivity, exemplified by how to start counting from the time of a switch above, and makes it virtually impossible to avoid time and space leaks in general, at least in an embedded setting. After all, something like count lbp can be compiled
Re: [Haskell-cafe] Yampa vs. Reactive
On 17 Dec 2008, at 03:14, Tony Hannan wrote: Hello, Can someone describe the advantages and disadvantages of the Yampa library versus the Reactive library for functional reactive programming, or point me to a link. Thanks, Tony P.S. It is hard to google for Yampa and Reactive together because reactive as in functional reactive programming always appears with Yampa Advantages of Yampa: • Just at the moment, slightly more polished. • (maybe) harder to introduce space/time leaks. Advantages of Reactive: • More functional programming like -- doesn't require you to use arrows everywhere, and supports a nice applicative style. • In very active development. • Active community. Hope that helps -- my personal preference is that Reactive is the one I'd use for any FRP project at the moment. Bob
Re: [Haskell-cafe] Time for a new logo?
On 16 Dec 2008, at 18:40, Darrin Thompson wrote: [ASCII-art logo proposal, mangled by the archive] Oh please no, please don't let the logo be something that says Haskell, it's all about monads. Bob
Re: [Haskell-cafe] Time for a new logo?
On 17 Dec 2008, at 09:26, Luke Palmer wrote: On Wed, Dec 17, 2008 at 1:10 AM, Thomas Davie tom.da...@gmail.com wrote: On 16 Dec 2008, at 18:40, Darrin Thompson wrote: [ASCII-art logo proposal, mangled by the archive] Oh please no, please don't let the logo be something that says Haskell, it's all about monads. But it's a very pretty logo. And the idea of computation abstractions, Applicatives and Monads in particular, are a pretty big part of Haskell as a language and as a culture. Haskell, it's not exactly not about monads. No, I agree, but there's already a large body of literature that implies that Haskell is pretty much only about monads, and I'd hate to see the logo go the same way. Though I do take your point about abstractions being a major part of the language. Bob
Re: [Haskell-cafe] Yampa vs. Reactive
I'll have an attempt at addressing the questions, although I freely admit that I'm not as into Reactive as Conal is yet, so he may come and correct me in a minute. On 17 Dec 2008, at 15:29, Henrik Nilsson wrote: I have not used Reactive as such, but I did use Classic FRP extensively, and as far as I know the setup is similar, even if Reactive has a modern and more principled interface. Based on my Classic FRP experience (which may be out of date, if so, correct me), I'd say advantages of Yampa are: * More modular. Yampa's signal function type is explicitly parameterized on the input signal type. In Classic FRP and Reactive (as far as I know), the system input is implicitly connected to all signal functions (or behaviours) in the system. One consequence of this is that there were issues with reusing Classical FRP for different kinds of system inputs, and it was difficult to combine systems with different kinds of input. This was what prompted a parameterization on the type of the system input in the first place, which eventually led to Arrowized FRP and Yampa. I don't know what the current story of Reactive is in this respect. But having parameterized input has been crucial for work on big, mixed-domain, applications. (For example, a robot simulator with an interactive editor for setting up the world. The robots were FRP systems too, but their input is naturally of a different kind than the overall system input. It also turned out to be very useful to have an FRP preprocessor for the system input, which then was composed with the rest of the system using what effectively was arrow composition (>>>), but called something else at the time.) I'm not sure how this was set up in a classic FRP system, so I'm unable to comment on how exactly it's changed. What I will say is that as far as I understand what you're saying, Reactive has explicitly parameterized inputs. 
In your robot example I would expect something along the lines of data RobotInputs = RI {lightSensor :: Behavior Colour; bumpSwitch :: Event ()} -- A couple of example robot sensors robotBehavior :: RobotInputs -> Behavior Robot robotBehavior sensors = a behavior that combines the light sensor and the bump switch to stay in the light, but not keep driving into things. data UIInputs = UI {mousePoint :: Behavior Point; mouseClick :: Event (); ...} world :: UIInputs -> Behavior World world = interpret mouse and produce a world with barriers, robots and lights in it robotInputs :: World -> Behavior Robot -> RobotInputs robotInputs = given a robot in a world, generate the robot's inputs * A clear separation between signals, signal functions, and ordinary functions and values, yet the ability to easily integrate all kinds of computations. Arguably a matter of taste, and in some ways more a consequence of the Arrow syntax than Arrows themselves. But in Classical FRP, one had to do either a lot of explicit lifting (in practice, we often ended up writing lifting wrappers for entire libraries), or try to exploit overloading for implicit lifting. The latter is quite limited though, partly due to how Haskell's type classes are organized and that language support for overloaded constants is limited to numerical constants. In any case, when we switched to arrows and arrow syntax, I found it liberating to not have to lift everything to signal functions first, but that I could program both with signals and signal functions on the one hand, and plain values and functions on the other, at the same time and fairly seamlessly. And personally, I also felt this made the programs conceptually clearer and easier to understand. My understanding is that Reactive is similar to Classical FRP in this respect. I agree and disagree here (that'll be the matter of taste creeping in). I agree that in Reactive you often spend a lot of keystrokes lifting pure values into either an Event or a Behavior. 
Having said that I'd argue that Yampa requires us to do this too -- it merely enforces the style in which we do it (we must do it with arrows). My personal opinion on this one is that I prefer the applicative interface to the arrow based one, because it feels more like just writing a functional program. * Classical FRP lacked a satisfying approach to handle dynamic collections of reactive entities as needed when programming typical video games for example. Yampa has a way. One can argue about how satisfying it is, but at least it fulfills basic requirements such as allowing logically removed entities to be truly removed (garbage collected). I don't know where Reactive stands here. I reserve judgement at the moment because I haven't explicitly written a reactive program involving a collection of behaviors; having said that, I see no reason why removing a value from the list in a Behavior [a],
Re: [Haskell-cafe] Time for a new logo?
On 15 Dec 2008, at 03:27, Don Stewart wrote: Could you attach it to the web page, http://haskell.org/haskellwiki/Haskell_logos/New_logo_ideas I've stuck a contender up there too. Bob
Re: [Haskell-cafe] Time for a new logo?
On 15 Dec 2008, at 12:43, Henning Thielemann wrote: On Sun, 14 Dec 2008, Don Stewart wrote: I noticed a new haskell logo idea on a tshirt today, http://image.spreadshirt.net/image-server/image/configuration/13215127/producttypecolor/2/type/png Simple, clean and *pure*. Instead of the "we got lots going on" of the current logo. Call me conservative, but I like the current logo more than the new suggestions. Why isn't it shown big on the welcome page of haskell.org? Are you referring to this logo? [inline image: Haskell.png] In which case, it is shown on Haskell.org, unless there's another logo that I don't know about? Personally, I find this logo cluttered and complicated, which I suspect conveys something to people thinking about using Haskell. Bob
Re: [Haskell-cafe] Is unsafePerformIO safe here?
On 8 Dec 2008, at 01:28, John Ky wrote: Hi, Is the following safe? moo :: TVar Int moo = unsafePerformIO $ newTVarIO 1 I'm interested in writing a stateful application, and I wanted to start with writing some IO functions that did stuff on some state and then test them over long periods of time in GHCi. I was worried I might be depending on some guarantees that aren't actually there, like moo being discarded and recreated in between invocations of different functions. Define safe... In this case though, I would guess it's not safe. The compiler is free to call moo zero, one or many times depending on its evaluation strategy, and when it's demanded. It's possible that your TVar will get created many times, and different values returned by the constant moo. That sounds pretty unsafe to me. Bob
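For what it's worth, the usual way to make this idiom dependable is to pin the definition down with a NOINLINE pragma (and disable common-subexpression elimination), so the top-level value is evaluated at most once. A sketch, assuming the stm package is installed:

```haskell
{-# OPTIONS_GHC -fno-cse #-}
import Control.Concurrent.STM (TVar, newTVarIO, atomically, readTVar, writeTVar)
import System.IO.Unsafe (unsafePerformIO)

-- NOINLINE stops GHC duplicating the unsafePerformIO call at each use
-- site, so exactly one TVar is created for the whole program run.
{-# NOINLINE moo #-}
moo :: TVar Int
moo = unsafePerformIO (newTVarIO 1)

main :: IO ()
main = do
  atomically (writeTVar moo 42)   -- write through the shared variable...
  v <- atomically (readTVar moo)  -- ...and read the same cell back
  print v
```

Without the pragma the concern in the message above is real: the compiler may float or duplicate the `unsafePerformIO` application and hand different call sites different TVars.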
Re: [Haskell-cafe] Re: Haskell haikus
On 8 Dec 2008, at 03:02, Richard O'Keefe wrote: It's proving remarkably hard to pin down just what a Haiku is supposed to be in English. Taking the 3-5-3 syllable pattern, how about Soft rain falls while Haskell infers all my types. I always thought that Haikus had a seven five seven pattern, no? Bob
Re: [Haskell-cafe] Functional version of this OO snippet
On 5 Dec 2008, at 13:40, Andrew Wagner wrote: Hi all, public interface IEngine { void foo(); void bar(string bah); ... } public class Program { public void Run(IEngine engine){ while(true){ string command = GetLine(); if (command.startsWith("foo")){ engine.foo(); } else if (command.startsWith("bar")){ engine.bar(command); ... else break; } In other words, I want to provide the same UI across multiple implementations of the engine that actually processes the commands. class IEngine a where foo :: a -> String bar :: a -> String -> String run :: IEngine a => a -> IO () run x = interact (unlines . map (processCommand x) . lines) processCommand e c | "foo" `isPrefixOf` c = foo e | "bar" `isPrefixOf` c = bar e c That should about do it. Bob
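Filling the sketch above out into something self-contained and runnable — the names follow the thread, but EchoEngine and its command responses are invented for illustration:

```haskell
import Data.List (isPrefixOf)

-- One engine interface, many implementations: the class from the thread.
class IEngine a where
  foo :: a -> String
  bar :: a -> String -> String

-- Pure command dispatch, shared by every engine.
processCommand :: IEngine a => a -> String -> String
processCommand e c
  | "foo" `isPrefixOf` c = foo e
  | "bar" `isPrefixOf` c = bar e c
  | otherwise            = ""

-- A made-up engine to run the shared UI against.
data EchoEngine = EchoEngine

instance IEngine EchoEngine where
  foo _   = "foo!"
  bar _ c = "you said: " ++ c

-- The only IO: feed stdin lines through the pure dispatcher.
run :: IEngine a => a -> IO ()
run e = interact (unlines . map (processCommand e) . lines)

main :: IO ()
main = run EchoEngine
```

Swapping engines means swapping one value: `run SomeOtherEngine` reuses the whole UI unchanged.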
Re: [Haskell-cafe] Functional version of this OO snippet
In other words, I want to provide the same UI across multiple implementations of the engine that actually processes the commands. You can of course make my reply somewhat more functional by removing the interact from run, and just making it have type IEngine a => a -> String -> String. That way it will be nice and functional and composable and fit into a larger system. Bob
Re: [Haskell-cafe] Re: Functional version of this OO snippet
You don't even need a type class, a simple data type is enough. Very true, but I disagree that you've made it functional in any way; IO is all about sequencing things, it's very much not a functional style data Engine = Engine { foo :: IO (), bar :: String -> IO () } run e = processCommand e =<< getLine processCommand e c | "foo" `isPrefixOf` c = foo e >> run e | "bar" `isPrefixOf` c = bar e c >> run e | otherwise = return () This is much nicer done as functions from String -> String: it becomes much more composable, removes a sequential style from your code and stops processCommand depending on always working with the run function, making it a bit more orthogonal. data Engine = Engine {foo :: String, bar :: String -> String} run e = unlines . map (processCommand e) . lines processCommand e c | "foo" `isPrefixOf` c = foo e | "bar" `isPrefixOf` c = bar e c | otherwise = "" Bob
Re: [Haskell-cafe] Re: Functional version of this OO snippet
On 5 Dec 2008, at 16:42, Apfelmus, Heinrich wrote:

Thomas Davie wrote:
You don't even need a type class, a simple data type is enough.
Very true, but I disagree that you've made it functional in any way, IO is all about sequencing things, it's very much not a functional style
data Engine = Engine { foo :: IO (), bar :: String -> IO () }
This is much nicer done as functions from String -> String

Sure, I agree. I was just replicating foo and bar from the OP because I don't know what kind of effect he had in mind. I mean, instead of merely mapping each command in isolation, he could want to accumulate a value or read files or something.

Sure, and he could then use a fold instead of a map. Reading files is problematic, but as long as you're only doing it once (the most common situation) it's entirely fine wrapped up in an unsafePerformIO. Either way, the question was how to do it functionally, and to do it with all of the nice shiny benefits we get with functional code, like composability and orthogonality, you need to do it with String -> String.

Bob
Re: [Haskell-cafe] instance Applicative f => Applicative (StateT s f)
On 5 Dec 2008, at 16:54, Ross Paterson wrote:

On Fri, Dec 05, 2008 at 04:35:51PM +0100, Martijn van Steenbergen wrote:
How do I implement the following?

instance Applicative f => Applicative (StateT s f)

The implementation of pure is trivial, but I can't figure out an implementation for <*>. Is it possible at all, or do I need to require f to be a monad?

Yes, because you need part of the value generated by the first computation, namely the state (inside the f), to construct the second one. You can do that in a Monad, but not in an Applicative.

I don't think that's true, although I'm yet to decide if Applicative for State is possible. someState <*> someOtherState should take the function out of the first state, take the value out of the second state, apply one to the other, and return a new stateful value as the result. At no point in that description do I make mention of the previous state of one of these values.

Bob
Re: [Haskell-cafe] Re: Functional version of this OO snippet
On 5 Dec 2008, at 17:00, Duncan Coutts wrote:

On Fri, 2008-12-05 at 16:50 +0100, Thomas Davie wrote:
Sure, and he could then use a fold instead of a map. Reading files is problematic, but as long as you're only doing it once (the most common situation) it's entirely fine wrapped up in an unsafePerformIO.

No! Please don't go telling people it's entirely fine to use unsafePerformIO like that (or at all really).

Exactly what isn't fine about it?

Bob
Re: [Haskell-cafe] Re: Functional version of this OO snippet
On 5 Dec 2008, at 17:46, Duncan Coutts wrote:

On Fri, 2008-12-05 at 17:06 +0100, Thomas Davie wrote:
On 5 Dec 2008, at 17:00, Duncan Coutts wrote:
On Fri, 2008-12-05 at 16:50 +0100, Thomas Davie wrote:
Sure, and he could then use a fold instead of a map. Reading files is problematic, but as long as you're only doing it once (the most common situation) it's entirely fine wrapped up in an unsafePerformIO.
No! Please don't go telling people it's entirely fine to use unsafePerformIO like that (or at all really).
Exactly what isn't fine about it?

It's the antithesis of pure functional programming. It's so unsafe that we don't even have a semantics for it.

Yes, but we also don't have semantics for IO, so it's no real surprise that we have none for something that runs an IO action.

One needs pretty special justification for using unsafePerformIO, and such cases should be hidden in libraries presenting pure interfaces, not used willy-nilly in general application code. Note that I'm not claiming that it's necessarily going to do bad things in the specific case you're imagining using it in. However, just because it happens not to do bad things in this case does not mean that it's ok to use it here or in general.

No, and I never said that it should be used more generally -- I was very careful to say that in this case I was following the rules for making sure that verifyItsSafeYourselfPerformIO was indeed safe.

Bob
Re: [Haskell-cafe] instance Applicative f => Applicative (StateT s f)
That would be incompatible with the ap of the monad where it exists, but it's worse than that. Which state will you return? If you return one of the states output by one or other of the arguments, you'll break one of the laws:

pure id <*> v = v
u <*> pure y = pure ($ y) <*> u

You're forced to return the input state, so the Applicative would just be an obfuscated Reader.

Which reminds me, of course, that there is a valid Applicative for states (assuming the Monad instance is valid):

instance Monad f => Applicative (StateT s f) where
  pure  = return
  (<*>) = ap

All monads are also applicatives ;)

Bob
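For reference, the instance that threads the state through the base functor needs Monad f, not just Applicative f, which is exactly Ross's point. A minimal sketch (the newtype mirrors Control.Monad.Trans.State; the `tick` example is made up for the demo):

```haskell
newtype StateT s f a = StateT { runStateT :: s -> f (a, s) }

instance Functor f => Functor (StateT s f) where
  fmap g (StateT m) = StateT $ \s -> fmap (\(a, s') -> (g a, s')) (m s)

-- To run the second computation we need the state produced by the
-- first one, and that state lives *inside* f -- hence Monad f.
instance Monad f => Applicative (StateT s f) where
  pure a = StateT $ \s -> return (a, s)
  StateT mg <*> StateT mx = StateT $ \s -> do
    (g, s')  <- mg s
    (x, s'') <- mx s'
    return (g x, s'')

-- A small demo over f = Maybe: yield the current counter and bump it.
tick :: StateT Int Maybe Int
tick = StateT $ \s -> Just (s, s + 1)

main :: IO ()
main = print (runStateT ((,) <$> tick <*> tick) 0)  -- Just ((0,1),2)
```

The demo makes the state threading visible: the second `tick` sees the state the first one produced.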
Re: Suggestion: Syntactic sugar for Maps!
On 27 Nov 2008, at 19:59, circ ular wrote:

I suggest Haskell introduce some syntactic sugar for Maps. Python uses {"this": 2, "is": 1, "a": 1, "Map": 1}. Clojure also uses braces: {:k1 1, :k2 3}, where whitespace acts as the comma but commas are also allowed. I find the import Data.Map and then fromList [("hello", 1), ("there", 2)] (or the other form that I forgot, because it is too long!) to be too long... So why not {"hello": 1, "there": 2}?

In a similar vein, I suggest not only that we not do this, but also that Haskell' remove the syntactic sugar for lists (but keep it for strings)! I have two (three if you agree with my opinions on other parts of the language) reasons for this:

1) It's a special case that doesn't gain anything much. [a,b,c,d] is actually only one character shorter, and not really any clearer, than a:b:c:d:[].
2) Removing it would clear up the ',' character for use in infix constructors.
3) (requiring you to agree with my opinions about tuples) it would allow the tuple type to be replaced with pairs instead. (,) could become a *real* infix data constructor for pairs. This would let us recreate tuples simply based on a right-associative (,) constructor. I realise there is a slight issue with strictness, and (,) introducing more bottoms in a tuple than current tuples have, but I'm sure a strict version of the (,) constructor could be created with the same semantics as the current tuple.

Just my 2p

Thanks

Tom Davie
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime
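Point 3 can be sketched in today's Haskell. Since ',' is reserved syntax, the sketch below uses a hypothetical :& constructor as a stand-in for the proposed infix (,):

```haskell
infixr 6 :&

-- A right-associative pair constructor; nesting it emulates wider
-- tuples, so a "triple" is just a :& (b :& c).
data Pair a b = a :& b deriving (Show, Eq)

first :: Pair a b -> a
first (x :& _) = x

second :: Pair a b -> b
second (_ :& y) = y

-- Thanks to right associativity, no parentheses are needed here.
triple :: Pair Int (Pair Bool Char)
triple = 1 :& True :& 'x'

main :: IO ()
main = do
  print (first triple)           -- 1
  print (first (second triple))  -- True
```

The strictness caveat from the post is visible here too: `1 :& undefined` is a well-defined value, whereas a strict variant would need strictness annotations on the fields.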
Re: [Haskell-cafe] Proof that Haskell is RT
On 12 Nov 2008, at 11:11, Andrew Birkett wrote:

Hi, Is there a formal proof that the Haskell language is referentially transparent? Many people state that Haskell is RT without backing up that claim. I know that, in practice, I can't write any counter-examples, but that's a bit hand-wavy. Is there a formal proof that, for all possible Haskell programs, we can replace coreferent expressions without changing the meaning of a program? (I was writing a blog post about the origins of the phrase 'referentially transparent' and it made me think about this.)

I think the informal proof goes along the lines of "because that's what the spec says" -- Haskell's operational semantics are not specified in the report, only, IIRC, a woolly description of having some sort of non-strict beta-reduction behaviour.

Bob
Re: [Haskell-cafe] Proof that Haskell is RT
On 12 Nov 2008, at 14:47, Mitchell, Neil wrote:

It's possible that there's some more direct approach that represents types as some kind of runtime values, but nobody (to my knowledge) has done that.

I don't think it's possible - I tried it and failed. Consider:

show (f [])

where f has the semantics of id, but has either the return type [Int] or [Char] - you get different results. Without computing the types everywhere, I don't see how you can determine the precise type of [].

Surely all this means is that part of the semantics of Haskell is the semantics of the type system -- isn't this expected?

Bob
[Haskell-cafe] announce [(InfixApplicative, 1.0), (OpenGLCheck, 1.0), (obj, 0.1)]
Dear Haskellers,

I've just uploaded a few packages to Hackage which I have produced while working at Anygma. I thought that people might be interested in knowing about these:

obj-0.1: A library for loading and writing obj 3D models. This is still an early version and rather limited, but it's a starting point.
Features:
• Load models, complete with normals, texture coordinates and materials
• Compute normals where smoothing groups are specified
• An example program to load an obj model and render it spinning on the screen
• Faster loading than Maya itself!
Bugs:
• Memory usage is rather large
• The exposed API is rather limited
To Dos:
• Add support for loading groups
• Support for smooth surfaces

OpenGLCheck-1.0: A micro-package containing instances of Arbitrary for the data structures provided in Graphics.Rendering.OpenGL.

InfixApplicative-1.0: A second micro-package containing a pair of functions -- (<^) and (^>) -- which can be used to provide an infix liftA2, thus: suppose we wanted to calculate liftA2 (+) [1,2] [2,3], but are unhappy with the fact that (+) is no longer infix; we may now use [1,2] <^(+)^> [2,3].

Thanks -- any comments are greatly appreciated!

Tom Davie
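The angle brackets in the operator names appear to have been eaten by HTML in the archived post; assuming the operators are (<^) and (^>), they can be reconstructed in a few lines:

```haskell
import Control.Applicative (liftA2)

infixl 4 <^, ^>

-- Attach a binary function to the first (wrapped) argument...
(<^) :: Applicative f => f a -> (a -> b -> c) -> f (b -> c)
x <^ f = fmap f x

-- ...then apply the partially applied result to the second argument.
(^>) :: Applicative f => f (b -> c) -> f b -> f c
(^>) = (<*>)

main :: IO ()
main = do
  print (liftA2 (+) [1, 2] [2, 3])  -- [3,4,4,5]
  print ([1, 2] <^(+)^> [2, 3])     -- [3,4,4,5]
```

Because both operators are infixl 4, `[1,2] <^(+)^> [2,3]` parses as `([1,2] <^ (+)) ^> [2,3]`, giving the same result as liftA2.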
Re: [Haskell-cafe] Why 'round' does not just round numbers ?
[1] The Haskell 98 Report: Predefined Types and Classes http://haskell.org/onlinereport/basic.html

This behaviour is not what I expect after reading the description at http://haskell.org/ghc/docs/latest/html/libraries/base/Prelude.html#v:round . Given that this behaviour has caused a bit of confusion, I think a change to the documentation might be in order.

Given that the documentation says "round x returns the nearest integer to x", I think pretty much any behaviour can be expected -- there's no single integer nearest to 2.5. The documentation certainly needs updating, though.

Bob
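The behaviour in question is the Haskell 98 rule that round picks the *even* neighbour when the argument is exactly half way between two integers; a quick check:

```haskell
-- round does "banker's rounding": on a tie it picks the even
-- neighbour, per the Haskell 98 report's definition of round.
main :: IO ()
main = print (map round [0.5, 1.5, 2.5, 3.5 :: Double])  -- [0,2,2,4]
```

Note that both 0.5 and 2.5 round *down* while 1.5 and 3.5 round up, which is exactly the surprise the thread is about.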
Re: [Haskell-cafe] Red-Blue Stack
On 25 Sep 2008, at 06:11, Matthew Eastman wrote:

Hey guys, this is probably more of a question about functional programming than it is about Haskell, but hopefully you can help me out. I'm new to thinking about things in a functional way, so I'm not sure what the best way to do some things is. I'm in a data structures course right now, and the assignments for the course are done in Java. I figured it'd be fun to try and implement them in Haskell as well. The part of the assignment I'm working on is to implement a RedBlueStack, a stack where you can push items in as either Red or Blue. You can pop Red and Blue items individually, but the stack has to keep track of the overall order of items. i.e. popping Blue in [Red, Red, Blue, Red, Blue] would give [Red, Red, Blue]

I wanted to add my own 2p to this discussion. I'm not dead certain I understand what is meant by the statement above, so I'm going to make a guess that when we pop an item, the top item on the stack should end up being the next item of the same colour as we popped. In this interpretation, here's what I think is an O(1) implementation:

data RBStack a = Empty | More RBColour a (RBStack a) (RBStack a)
data RBColour = Red | Blue deriving Eq

rbPush :: RBColour -> a -> RBStack a -> RBStack a
rbPush c x Empty = More c x Empty Empty
rbPush c x e@(More c' v asCs nextNonC)
  | c == c'   = More c x e nextNonC
  | otherwise = More c x nextNonC e

rbPop :: RBColour -> RBStack a -> RBStack a
rbPop c Empty = error "Empty stack, can't pop"
rbPop c (More c' v asCs nextNonC)
  | c == c'   = asCs
  | otherwise = rbPop c nextNonC

The idea is that an RBStack contains its colour, an element, and two other stacks -- the first one is the substack we should get by popping an element of the same colour. The second substack is the substack we get by looking for the next item of the other colour. When we push, we compare colours with the top element of the stack, and we swap around the same-coloured/differently-coloured stacks appropriately. When we pop, we jump to the first element of the right colour, and then we jump to the next element of the same colour. I hope I haven't missed something.

Bob
Re: [Haskell-cafe] Red-Blue Stack
Thomas Davie wrote:
In this interpretation, here's what I think is an O(1) implementation:
...
rbPop :: RBColour -> RBStack a -> RBStack a
rbPop c Empty = error "Empty stack, can't pop"
rbPop c (More c' v asCs nextNonC)
  | c == c'   = asCs
  | otherwise = rbPop c nextNonC
...
Your pop doesn't seem to be in O(1), since you have to walk through the nextNonC stack if the colours don't match.

Yep, but this is still O(1), as you can guarantee that nextNonC will start with something of the correct colour. Thus the worst case here is that we walk once to the nextNonC element, and then do a different O(1) operation.

Bob
Re: [Haskell-cafe] Re: Red-Blue Stack
On 27 Sep 2008, at 20:16, apfelmus wrote:

Thomas Davie wrote:
Matthew Eastman wrote:
The part of the assignment I'm working on is to implement a RedBlueStack, a stack where you can push items in as either Red or Blue. You can pop Red and Blue items individually, but the stack has to keep track of the overall order of items. i.e. popping Blue in [Red, Red, Blue, Red, Blue] would give [Red, Red, Blue]

I wanted to add my own 2p to this discussion. I'm not dead certain I understand what is meant by the statement above, so I'm going to make a guess that when we pop an item, the top item on the stack should end up being the next item of the same colour as we popped. In this interpretation, here's what I think is an O(1) implementation:

data RBStack a = Empty | More RBColour a (RBStack a) (RBStack a)
data RBColour = Red | Blue deriving Eq

rbPush :: RBColour -> a -> RBStack a -> RBStack a
rbPush c x Empty = More c x Empty Empty
rbPush c x e@(More c' v asCs nextNonC)
  | c == c'   = More c x e nextNonC
  | otherwise = More c x nextNonC e

rbPop :: RBColour -> RBStack a -> RBStack a
rbPop c Empty = error "Empty stack, can't pop"
rbPop c (More c' v asCs nextNonC)
  | c == c'   = asCs
  | otherwise = rbPop c nextNonC

The idea is that an RBStack contains its colour, an element, and two other stacks -- the first one is the substack we should get by popping an element of the same colour. The second substack is the substack we get by looking for the next item of the other colour. When we push, we compare colours with the top element of the stack, and we swap around the same-coloured/differently-coloured stacks appropriately. When we pop, we jump to the first element of the right colour, and then we jump to the next element of the same colour. I hope I haven't missed something.

This looks O(1), but I don't understand your proposal enough to say that it matches what Matthew had in mind. Fortunately, understanding can be replaced with equational laws :) So, I think Matthew wants the following specification: a red-blue stack is a data structure

data RBStack a

with operations

data Color = Red | Blue

empty :: RBStack a
push  :: Color -> a -> RBStack a -> RBStack a
pop   :: Color -> RBStack a -> RBStack a
top   :: RBStack a -> Maybe (Color, a)

subject to the following laws:

-- pop removes elements of the same color
pop Red  . push Red x  = id
pop Blue . push Blue x = id

-- pop doesn't interfere with elements of the other color
pop Red  . push Blue x = push Blue x . pop Red
pop Blue . push Red x  = push Red x  . pop Blue

-- top returns the last color pushed, or nothing otherwise
(top . push c x) stack = Just (c,x)
top empty = Nothing

-- pop on the empty stack does nothing
pop c empty = empty

These laws uniquely determine the behavior of a red-blue stack. Unfortunately, your proposal does not seem to match the second group of laws:

  (pop Blue . push Red r . push Blue b) Empty
= pop Blue (push Red r (More Blue b Empty Empty))
= pop Blue (More Red r Empty (More Blue b Empty Empty))
= pop Blue (More Blue b Empty Empty)
= Empty

but

  (push Red r . pop Blue . push Blue b) Empty
= push Red r (pop Blue (More Blue b Empty Empty))
= push Red r Empty
= More Red r Empty Empty

The red element got lost in the first case.

I don't think my proposal even meets the first set of laws -- I interpreted the question differently:

pop Red (push Red 1 (More Blue 2 Empty (More Red 3 Empty Empty))) == More Red 3 Empty Empty

Bob
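For comparison, the laws above are satisfied directly by a naive single-list representation. Its pop is O(n), so this is a semantic reference point rather than the efficient structure being sought:

```haskell
data Color = Red | Blue deriving (Eq, Show)

-- The whole stack as one list in push order; ordering is preserved
-- for free, and pop just deletes the first element of its colour.
newtype RBStack a = RBStack [(Color, a)] deriving (Eq, Show)

empty :: RBStack a
empty = RBStack []

push :: Color -> a -> RBStack a -> RBStack a
push c x (RBStack xs) = RBStack ((c, x) : xs)

-- Remove the most recently pushed element of the given colour,
-- leaving everything else in order (O(n) in the worst case).
pop :: Color -> RBStack a -> RBStack a
pop c (RBStack xs) = RBStack (go xs)
  where
    go [] = []
    go (y@(c', _) : ys)
      | c == c'   = ys
      | otherwise = y : go ys

top :: RBStack a -> Maybe (Color, a)
top (RBStack xs) = case xs of
  []      -> Nothing
  (y : _) -> Just y

main :: IO ()
main = do
  -- The law that failed above: pop Blue . push Red r . push Blue b
  -- should equal push Red r, and here it does.
  print (pop Blue (push Red 1 (push Blue 2 empty)) == push Red (1 :: Int) empty)
```

The law-breaking example from the parent post evaluates correctly here: the Red element survives the Blue pop.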
Re: [Haskell-cafe] Hmm, what license to use?
Now I have fairly strong feelings about freedom of code, and everything I release is either under GPL or LGPL. What I like about those licenses is that they protect freedom in a way that I think it should be protected, and force a sort of reciprocity which resonates very well with my selfishness. Re-licensing code under BSD is not something I'm willing to do without something that compensates for that reciprocity, and I can think of several kinds of compensation here, but they all pretty much boil down to either fame or fortune. ;-)

Sorry, this isn't the most relevant comment to the discussion, but I thought I'd add my own thought re the GPL/LGPL. My personal feeling is that the point of open source is to allow people the freedom to do what they want with a piece of code. The GPL/LGPL go completely against this idea, in that they restrict what I can do with the code to only things that are similarly licensed. I've seen this cause problems even in environments where there's no commercial gain to be had. Take, for example, the ZFS file system. Sun have been kind enough to completely open source it. Unfortunately, Linux users can never hope for a stable version that works in the kernel, simply because the GPL stipulates that ZFS would have to be relicensed to be included.

That's my 2p's worth on why I use the BSD license over the GPL. In short, the GPL does not promote freedom, it promotes restrictions -- just not the restrictions we've grown to hate from most companies.

Bob
Re: [Haskell-cafe] Hmm, what license to use?
On 26 Sep 2008, at 12:12, Janis Voigtlaender wrote:

Manlio Perillo wrote:
When I compare GPL and MIT/BSD licenses, I do a simple reasoning. Suppose a doctor on a battlefield meets a badly injured enemy. Should he help the enemy?

I'm so glad I don't understand this ;-)

Should you decide not to give someone something based on the fact that you either don't like them, or don't like what they'll do with the thing you give them?

Bob
Re: [Haskell-cafe] Hmm, what license to use?
On 26 Sep 2008, at 12:28, Dougal Stanton wrote:

On Fri, Sep 26, 2008 at 11:17 AM, Thomas Davie [EMAIL PROTECTED] wrote:
Should you decide not to give someone something based on the fact that you either don't like them, or don't like what they'll do with the thing you give them?

That rather depends what you intend to give, doesn't it? :-) Though the analogy is inapt, because the GPL *doesn't* prevent use of software for things you don't like: http://www.gnu.org/licenses/gpl-faq.html#NoMilitary

Sure it does -- it prevents the use of the software in things that are closed source.

Bob
Re: [Haskell-cafe] Hmm, what license to use?
On 26 Sep 2008, at 17:51, Jonathan Cast wrote:

On Fri, 2008-09-26 at 12:17 +0200, Thomas Davie wrote:
On 26 Sep 2008, at 12:12, Janis Voigtlaender wrote:
Manlio Perillo wrote:
When I compare GPL and MIT/BSD licenses, I do a simple reasoning. Suppose a doctor on a battlefield meets a badly injured enemy. Should he help the enemy?
I'm so glad I don't understand this ;-)
Should you decide not to give someone something based on the fact that you either don't like them, or don't like what they'll do with the thing you give them?

I think the standard answer to your question is that you get the enemy to *surrender* first, patch him up enough to move him, and then stick him in a POW camp for the duration, or until you get something in return for releasing him. I would never patch someone up so he can go back to *shooting* at me, or my friends. Never.

Yet doctors all abide by the Hippocratic oath.

Bob
Re: [Haskell-cafe] Re: Red-Blue Stack
On 26 Sep 2008, at 19:18, Stephan Friedrichs wrote:

apfelmus wrote:
[..] Persistent data structures are harder to come up with than ephemeral ones, [...]

Yes, in some cases it's quite hard to find a persistent solution for a data structure that is rather trivial compared to its ephemeral counterpart. My question is: is there a case where finding a persistent solution that performs equally well is *impossible*, rather than just harder? I mean, might there be a case where (forced) persistence (as we have in pure Haskell) is a definite disadvantage in terms of big-O notation? Do some problems even move from P to NP in a persistent setting?

I'm fairly confident one could come up with a proof that you'll never go from P to NP because of it, along the lines of treating all memory as being a list. Operations to modify memory are all in P (although slow), so any algorithm that relies on mutation to be in P will stay in P (although with a higher polynomial factor).

Bob
Re: [Haskell-cafe] Climbing up the shootout...
On 22 Sep 2008, at 11:46, Manlio Perillo wrote:

Don Stewart ha scritto:
Thanks to those guys who've submitted parallel programs to the language benchmarks game, we're climbing up the rankings, now in 3rd, and ahead of C :)

This is cheating, IMHO. Some test comparisons are unfair. The first problem is with the thread-ring benchmark. Haskell uses the Concurrent Haskell extension, but all other programs (with some exceptions) use OS threads. This is unfair; as an example, a C program could make use of the GNU threads library for user-space threads, but there is no such program.

Who said that C *had* to use OS threads? The point of the shootout is to show the relative strengths of the various languages. One of the strengths of Haskell is some excellent lightweight thread support; this is not present in C, so C does badly on the tests that check how well you can deal with small threads.

With parallel programs it is the same: other languages do not have a parallel version.

Yes, and the new benchmarks are *specifically* designed to test how fast programs are on more recent multi-core hardware, so again, the other languages are welcome to submit parallel versions... It just turns out that Haskell is pretty damn good at doing parallelism.

Bob
Re: [Haskell-cafe] Predicativity?
On 17 Sep 2008, at 07:05, Wei Hu wrote:

Hello, I only have a vague understanding of predicativity/impredicativity, but cannot map this concept to practice. We know the type of id is forall a. a -> a. I thought id could not be applied to itself under predicative polymorphism. But Haskell and OCaml both type check (id id) with no problem. Is my understanding wrong? Can you show an example that doesn't type check under predicative polymorphism, but would type check under impredicative polymorphism?

In your application (id id) you create two instances of id, each of which has type forall a. a -> a, and each of which can be applied at a different type. In this case, the left one gets applied to the type (a -> a) and the right one a, giving them types (a -> a) -> (a -> a) and (a -> a) respectively. What will not type check, on the other hand, is:

main = g id

g h = h h 4

which needs something along the lines of rank-2 polymorphism.

Bob
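With a rank-2 annotation, so the argument keeps its polymorphism inside g, GHC accepts the example; a minimal sketch:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Without the annotation, inference tries h :: t -> t with t = t -> t
-- and fails the occurs check.  The rank-2 type keeps h polymorphic,
-- so it can be applied both to itself and (after that) to 4.
g :: (forall a. a -> a) -> Int
g h = h h 4

main :: IO ()
main = print (g id)  -- 4
```

The distinction with the (id id) case: there the polymorphic id is merely *instantiated* at two different monotypes, which predicative systems allow; here the function argument itself must stay polymorphic, which needs higher-rank types.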
Re: [Haskell-cafe] Unicode and Haskell
On 10 Sep 2008, at 00:01, Mattias Bengtsson wrote:

Today I wrote some sample code after a logic lecture at my university. The idea is to represent the AST of propositional logic as an ADT with some convenience functions (like Read/Show instances) and then later perhaps try to make some automatic transformations on the AST. After construction of the Show instances I found the output a bit boring and thought that some Unicode maths symbols would spice things up. What happens can be seen in the attached picture (only 3k, that's ok, right?). My terminal supports UTF-8 (when I do cat Logic.hs I can see the Unicode symbols properly). What might be the problem?

import Prelude hiding (print)
import System.IO.UTF8

main = print "lots of UTF8"

Bob
Re: Re[2]: [Haskell-cafe] Pure hashtable library
On 27 Aug 2008, at 10:09, Bulat Ziganshin wrote:

Hello Jason,
Wednesday, August 27, 2008, 11:55:31 AM, you wrote:
given these constraints, it should be just 10-20 lines of code, and provide much better efficiency than any tree/trie implementations
Much better efficiency in what way?
instead of going through many levels of tree/trie, the lookup function will just select an array element by hash value and look through a few elements in an assoc list:

data HT a b = HT (a -> Int)           -- hash function
                 (Array Int [(a,b)])  -- buckets

HT.lookup (HT hash arr) a = List.lookup a (arr ! hash a)

Which makes two assumptions. One is that your array is big enough (believable), and the other, that your font is big enough.

Bob
Re: Re[2]: [Haskell-cafe] Pure hashtable library
On 27 Aug 2008, at 10:39, Bayley, Alistair wrote:

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Thomas Davie
Much better efficiency in what way?
instead of going through many levels of tree/trie, the lookup function will just select an array element by hash value and look through a few elements in an assoc list:

data HT a b = HT (a -> Int)           -- hash function
                 (Array Int [(a,b)])  -- buckets

HT.lookup (HT hash arr) a = List.lookup a (arr ! hash a)

Which makes two assumptions. One is that your array is big enough (believable), and the other, that your font is big enough.

"... and the other, that your font is big enough." Que? This is lost on me. Care to explain?

Sorry, I probably shouldn't have sent that; it was a dig at the fact that the message was sent with all the text in font-size 930 or so.

Bob
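Back on topic, here is a runnable sketch of the pure hash table from the thread. The fromList builder and the toy hash function are my own assumptions, and note that List.lookup takes the key first, so the arguments in the quoted one-liner were swapped:

```haskell
import Data.Array (Array, accumArray, (!))
import Data.Char (ord)
import qualified Data.List as List

-- A bucket array of association lists, plus the hash function.
data HT a b = HT (a -> Int) (Array Int [(a, b)])

-- O(1) bucket selection, then a short walk down the assoc list.
htLookup :: Eq a => HT a b -> a -> Maybe b
htLookup (HT hash arr) a = List.lookup a (arr ! hash a)

-- Build a table with n buckets from an association list.
fromList :: Int -> (a -> Int) -> [(a, b)] -> HT a b
fromList n hash kvs = HT h (accumArray (flip (:)) [] (0, n - 1) buckets)
  where
    h k     = hash k `mod` n  -- keep every hash inside the array bounds
    buckets = [ (h k, (k, v)) | (k, v) <- kvs ]

main :: IO ()
main = do
  let ht = fromList 16 (sum . map ord) [("foo", 1), ("bar", 2 :: Int)]
  print (htLookup ht "foo")  -- Just 1
  print (htLookup ht "baz")  -- Nothing
```

Since the array is built once and never mutated, this matches the "pure hashtable" constraint in the thread: all the cost is paid at construction, and lookups stay pure.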
Re: [Haskell-cafe] Haskell Propeganda
On 24 Aug 2008, at 05:04, Albert Y. C. Lai wrote:

"Dear friends, Haskell prevents more errors and earlier." This is honest, relevant, good advocacy. "Dear friends, segfaults are type errors, not logical errors." Why would you indulge in this? It's even less relevant than bikeshed colours.

Is it? When I write C I spend a lot of my time sat in gdb trying to figure out where the error is that the Haskell type system would have caught for me. This is *very* relevant; it's right at the bottom line of whether I'm more productive in Haskell or in C.

Bob
Re: [Haskell-cafe] Haskell Speed Myth
On 24 Aug 2008, at 01:26, Brandon S. Allbery KF8NH wrote:

On 2008 Aug 23, at 18:34, Krzysztof Skrzętnicki wrote:
Recently I wrote a computation-intensive program that could easily utilize both cores. However, there was overhead just from compiling with -threaded and making some forkIO's. Still, the overhead was not larger than 50%, and with 4 cores I would probably still get the results faster - I didn't experience an order of magnitude slowdown. Perhaps it's the issue with OS X.

All that's needed for multicore to be a *lot* slower is doing it wrong. Make sure you're forcing the right things in the right places, or you could quietly be building up thunks on both cores that will cause lots of cross-core signaling or locking. And, well, make sure the generated code isn't stupid. Quite possibly the PPC code is an order of magnitude worse than the better-tested Intel code.

Except that the test was running on a Core2Duo, and it runs very fast when GHC does the threading on one core. My personal guess is that doing it properly threaded requires *lots* of kernel boundary crossings to do the locking etc. on OS X (being a nearly-micro-kernel). The test program was almost 100% made up of thread locking code.

Bob
Re: [Haskell-cafe] Haskell Speed Myth
On 24 Aug 2008, at 06:31, Thomas M. DuBuisson wrote:

That's really interesting -- I just tried this.
Compiling not using -threaded: 1.289 seconds
Compiling using -threaded, but not running with -N2: 3.403 seconds
Compiling using -threaded, and using -N2: 55.072 seconds

I was hoping to see a relative improvement when introducing an opportunity for parallelism in the program, so I made a version with two MVars filled at the start. This didn't work out though - perhaps some performance stands to be gained by improving the GHC scheduler wrt CPU / OS thread affinity for the Haskell threads? For the curious:

-O2: 7.3 seconds (CPU: 99.7% user)
-O2 -threaded: 11.5 seconds (CPU: 95% user, 5% system)
-O2 -threaded ... +RTS -N2: killed after 3 minutes (CPUs: 15% user, 20% system)

That's quite a lot of CPU time going to the system.
Specs: Linux 2.6.26 (Arch) x86_64, Intel Core 2 Duo 2.5GHz

Hmm, thanks, that's interesting -- I thought it was probably caused by OS X, but it appears to happen on Linux too. Could you try running the old code too, and see if you experience the order of magnitude slowdown as well?

Bob
Re: [Haskell-cafe] String to Double conversion in Haskell
On 24 Aug 2008, at 23:23, Don Stewart wrote:

dmehrtash:
I am trying to convert a string to a float. It seems that the Data.ByteString library only supports readInt. After some googling I came across a possible implementation: [1] http://sequence.svcs.cs.pdx.edu/node/373

Use the bytestring-lexing library, http://hackage.haskell.org/cgi-bin/hackage-scripts/package/bytestring-lexing which provides a copying and a non-copying lexer for doubles:

readDouble :: ByteString -> Maybe (Double, ByteString)
unsafeReadDouble :: ByteString -> Maybe (Double, ByteString)

Incidentally, is there any reason we can't have this for lazy ByteStrings?

Bob
[Haskell-cafe] Haskell Speed Myth
Lo guys, I thought you'd like to know about this result. I've been playing with the Debian language shootout programs under OS X, looking at how fast Haskell code is compared to C on OS X, rather than Linux. Interestingly, Haskell comes out rather better on OS X than it did on Linux. Here are my results (times in seconds):

                   C        Haskell   Relative speed       Inverse
  Binary Trees      6.842    1.228    0.179479684302835    5.57166123778502
  Fannkuch          5.683   15.73     2.76790427591061     0.361284170375079
  Mandelbrot        1.183    2.287    1.93322062552832     0.517271534761697
  nbody            10.275   16.219    1.57849148418491     0.633516246377705
  nsieve            0.167    0.253    1.51497005988024     0.660079051383399
  nsieve-bits       0.471    0.713    1.51380042462845     0.660589060308555
  partial sums      1.047    1.313    1.25405921680993     0.797410510281797
  pidigits          1.238    1.4      1.13085621970921     0.884285714285714
  recursive         1.554    3.594    2.31274131274131     0.432387312186978
  spectral-norm    27.939   19.165    0.685958695729983    1.45781372293243
  threadring       91.284    1.389    0.0152162481924543  65.719222462203
  -
  Averages                            1.35333620432893     0.738914688605306

Some notes:

Hardware: 2GHz Core 2 Duo, enough RAM not to worry about paging.

Some programs are not included; this is because the C code produced compile errors. The Haskell code appears to be portable in *all* cases.

The average slowdown for running Haskell is only 1.35 times on OS X! That's pretty damn good. I'm sure some people will say "yeah, but you have to optimise your code pretty heavily to get that kind of result". Interestingly, the programs that have the biggest speed advantage over C here are also the most naïvely written ones.

The thing that seems to make C slower is the implementation of malloc in OS X. This has a couple of implications: first, if Apple replaced the malloc library, it would probably push the result back to the 1.7 times slower we see under Linux. Second, Fannkuch can probably be optimised differently for Haskell to look even better -- at the moment, the Haskell code actually uses malloc!

Finally, that threading example... WOW! 65 times faster, and the code is *really* simple. The C, on the other hand, is a massive mess.

Bob ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
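The shootout's thread-ring program itself isn't reproduced in the message, but a minimal sketch of the same idea -- lightweight threads passing a token around a ring via forkIO and MVars -- might look like the following. The ring size and hop count here are illustrative, not the benchmark's actual parameters.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar
import Control.Monad (forM_, forever, replicateM)

-- Pass an integer token around a ring of n lightweight threads.
-- Each hop decrements the token; when it reaches 0, the thread
-- holding it reports its index.
ring :: Int -> Int -> IO Int
ring n hops = do
  boxes <- replicateM n newEmptyMVar
  done  <- newEmptyMVar
  let worker i inBox outBox = forever $ do
        tok <- takeMVar inBox
        if tok == 0
          then putMVar done i            -- token expired here
          else putMVar outBox (tok - 1)  -- pass it along
  forM_ (zip3 [1 .. n] boxes (tail boxes ++ [head boxes])) $
    \(i, inB, outB) -> forkIO (worker i inB outB)
  putMVar (head boxes) hops  -- inject the token at thread 1
  takeMVar done

main :: IO ()
main = ring 503 1000 >>= print  -- prints 498
```

Each thread is a GHC green thread, so spawning hundreds of them is cheap; this is exactly the pattern that is painful to express with raw OS threads in C.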
Re: [Haskell-cafe] Haskell Speed Myth
On 23 Aug 2008, at 20:01, Luke Palmer wrote:

2008/8/23 Thomas Davie [EMAIL PROTECTED]: Finally, that threading example... WOW! 65 times faster, and the code is *really* simple. The C, on the other hand, is a massive mess.

I've been wondering about this, but I can't check because I don't have a multi-core CPU. I've heard GHC's single-threaded runtime is very, very good. What are the results for the threading example when compiled with -threaded and run with at least +RTS -N2?

That's really interesting -- I just tried this.

Compiled without -threaded: 1.289 seconds
Compiled with -threaded, but run without -N2: 3.403 seconds
Compiled with -threaded, and run with +RTS -N2: 55.072 seconds

Wow! Haskell's runtime really is a *lot* better than trying to use operating system threads. I wonder if there's a point at which it becomes better to use both CPUs, or if the overhead of using OS threads for this problem is just too high.

Bob
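One way to confirm which of the three configurations above a binary is actually running under: GHC.Conc.numCapabilities reports how many capabilities the RTS was started with, so a run without +RTS -N2 reports 1 and a run with it reports 2. A small sketch (the program name and output wording are invented for illustration):

```haskell
import GHC.Conc (numCapabilities)

-- Without "+RTS -N2" (or when compiled without -threaded) this
-- prints 1; with the threaded RTS and "+RTS -N2" it prints 2.
main :: IO ()
main = putStrLn ("running with " ++ show numCapabilities ++ " capability(ies)")
```

This is handy when benchmarking, to make sure a surprising timing really came from the configuration you thought you were measuring.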
[Haskell-cafe] Haskell Propaganda
Today I made an interesting discovery. We all know the benefits of a strong type system, and often tout it as a major advantage of using Haskell. The discovery I made was that C programmers don't realise the implications of that, as this comment highlights: http://games.slashdot.org/comments.pl?sid=654821&cid=24716845

Apparently, no one realises that a SEGFAULT is a type error, just not a very helpful one.

Bob
Re: [Haskell-cafe] Haskell Propaganda
On 23 Aug 2008, at 22:36, Matus Tejiscak wrote:

On So, 2008-08-23 at 22:16 +0200, Thomas Davie wrote: Today I made an interesting discovery. We all know the benefits of a strong type system, and often tout it as a major advantage of using Haskell. The discovery I made was that C programmers don't realise the implications of that, as this comment highlights: http://games.slashdot.org/comments.pl?sid=654821&cid=24716845 Apparently, no one realises that a SEGFAULT is a type error, just not a very helpful one. Bob

Type errors are useful because they emerge at compile time and prevent you from compiling (and running) a broken program. A segfault is a runtime error and as such provides no such guide -- it may or may not arise, and you don't know something's wrong until SIGSEGV kills your app, screws all your data, crashes the airplane, etc. (without the possibility of telling whether/when it will happen).

I guess I didn't express my point very clearly... C programmers apparently don't realise that a sound type system will give them something -- i.e. their program won't ever segfault. I wonder, when we try to advertise Haskell, whether we should be saying "we can give you programs that never segfault" instead of "we have a strong type system".

Bob
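The point being debated here can be made concrete. Where C hands you a possibly-NULL pointer and only tells you at runtime (via SIGSEGV) that you forgot to check it, Haskell puts the possible absence in the type: you cannot use a Maybe Int as an Int without first deciding what Nothing means, and forgetting to do so is a compile-time type error. A minimal sketch (the lookup table and names are invented for illustration):

```haskell
-- A lookup that may fail, analogous to a C function returning
-- a possibly-NULL pointer.
lookupAge :: String -> Maybe Int
lookupAge name = lookup name [("alice", 30), ("bob", 25)]

-- Writing e.g. `lookupAge name + 1` would be rejected by the
-- compiler: Maybe Int is not Int. The absent case must be handled.
describe :: String -> String
describe name = case lookupAge name of
  Just age -> name ++ " is " ++ show age
  Nothing  -> "no record for " ++ name

main :: IO ()
main = mapM_ (putStrLn . describe) ["alice", "eve"]
```

The runtime crash in C becomes a compile-time conversation with the type checker in Haskell, which is exactly the "segfault is a type error, just not a very helpful one" observation.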
Re: [Haskell-cafe] Haskell Propaganda
On 23 Aug 2008, at 23:10, Tim Newsham wrote:

I guess I didn't express my point very clearly... C programmers apparently don't realise that a sound type system will give them something -- i.e. their program won't ever segfault. I wonder, when we try to advertise Haskell, whether we should be saying "we can give you programs that never segfault" instead of "we have a strong type system".

That would be overpromising. You can definitely get segfaults in Haskell. The obvious example being http://codepad.org/Q8cgS6x8 but many less contrived and more unexpected examples arise naturally (unfortunately). By the way, the Java camp has (correctly) been touting this argument for quite a while.

I'd be interested to see your other examples -- because that error is not happening in Haskell! You can't argue that Haskell fails to prevent segfaults just because you can embed a C segfault within Haskell.

Bob
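The codepad example isn't reproduced in the message, but the kind of thing presumably meant is an explicitly unsafe escape hatch such as Foreign.Storable.peek on a null pointer, where the FFI layer reads raw memory with no type-system protection. A sketch -- the unsafe action is only constructed here; actually running it (the commented line) segfaults exactly as the equivalent C would:

```haskell
import Foreign.Ptr (Ptr, nullPtr)
import Foreign.Storable (peek)

-- peek dereferences a raw pointer; nothing in the types guarantees
-- the pointer is valid, so this is where segfaults re-enter Haskell.
crash :: IO Int
crash = peek (nullPtr :: Ptr Int)

main :: IO ()
main = putStrLn "crash is defined, but deliberately never run"
  -- crash >>= print  -- uncommenting this segfaults the process
```

This is the crux of the disagreement in the thread: such crashes require opting into the Foreign.* escape hatches, which is arguably "embedding C" rather than ordinary Haskell.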