Re: [Haskell-cafe] Maintaining lambdabot
On 17 February 2013 18:03, Jan Stolarek <jan.stola...@p.lodz.pl> wrote:

> ... These changes would be quite invasive and the code wouldn't be
> compatible with the lambdabot repo on haskell.org. So before I start
> making any of them, I would like to hear from the community whether
> such changes to the source code of lambdabot would be considered
> helpful and acceptable.
>
> Janek

I say go for it! I'll be quite happy to start running your new code in
#haskell as soon as we get GHC 7.6 installed on Jason's Linode account. :)

- Cale

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] lambdabot-4.2.3.3
Lambdabot doesn't have a maintainer.

On 18 July 2012 08:33, Francesco Mazzoli <f...@mazzo.li> wrote:

> At Wed, 18 Jul 2012 15:14:47 +0400, Dmitry Malikov wrote:
>> A few days ago I tried to install the lambdabot package from Hackage
>> (4.2.3.2). Cabal install failed. Then I found DanBurton's GitHub repo
>> with some approaches to fix the lambdabot install. All dependency
>> packages (IOSpec, numbers) were already fixed. So I added the
>> FlexibleInstances extension to the cabal file and uploaded the
>> package to Hackage. I hope I did everything right.
>
> Did you ask the maintainer first? You should never upload a package
> that you are not maintaining without first asking the maintainer, and
> then haskell-cafe if you receive no response. If you did not ask,
> please do not do that again, and do tell the maintainer - whom in this
> case you should have no problem contacting, since Cale is often online
> on IRC.
>
> --
> Francesco * Often in error, never in doubt
Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution
2009/7/28 Sittampalam, Ganesh <ganesh.sittampa...@credit-suisse.com>:

> Cale Gibbard wrote:
>> There was a great related idea on #haskell the other day: Make
>> explicit qualification unnecessary whenever there is a *unique*
>> choice of module qualifications from those imported which would make
>> the expression typecheck. Ambiguities would still need to be
>> qualified, but I feel that this would eliminate 99% of all ugly
>> qualified names from code. It would be especially good in the case of
>> infix operators, which, as far as I know, nobody actually enjoys
>> qualifying explicitly. [...] What do people think of this idea?
>> Personally, it really annoys me whenever I'm forced to give explicit
>> module qualifications, and I think this would really help. It would
>> also subsume the DisambiguateRecordFields extension rather handily.
>
> I think this idea would severely damage compositionality. One example
> of this is that it would make it substantially less likely that
> subexpressions could be abstracted into a separate declaration without
> giving a type signature to fix the type of the new declaration.
>
> Ganesh

Ah, now that does seem a rather good point, the worry being that
generalisation happens at the top of that new declaration, thereby
suddenly making more than one of the options typecheck, even though the
function/value being defined is still used at the appropriate type.

That might be enough of a hindrance to kill the idea, yeah, though I
wonder exactly how often it would happen relative to the annoyance of
always having to make the obvious qualifications. It would be nice to
have as an extension at least, I think, to get a sense for this. I
wouldn't advocate putting anything in a standard which we haven't
actually tried, of course. (However, I also think we should have a bit
less respect in regard to keeping things in line with the standard, so
long as a compliant implementation exists...)

- Cale
Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution
2009/7/27 Johannes Waldmann <waldm...@imn.htwk-leipzig.de>:

> While browsing Haskell-Prime I found this:
> http://hackage.haskell.org/trac/haskell-prime/wiki/TypeDirectedNameResolution
> This is not some April Fool's day hoax? Because, it might actually
> turn Haskell into a somewhat usable (and marketable) language ...
> well, you know what I mean.

You would be arguing against it then? ;)

> Is there 'ghc -XTypeDirectedNameResolution' yet?

I'm pretty sure there is not.

I don't really care for this proposal because it involves yet more
overloading of (.), which rather annoys me. Composition is by far the
most common infix operator in my code, and I consider its choice for
module qualification a minor, but irritating, mistake. (It makes
composition chains with qualified names just that much harder to scan
with your eyes.)

There was a great related idea on #haskell the other day: Make explicit
qualification unnecessary whenever there is a *unique* choice of module
qualifications from those imported which would make the expression
typecheck. Ambiguities would still need to be qualified, but I feel that
this would eliminate 99% of all ugly qualified names from code. It would
be especially good in the case of infix operators, which, as far as I
know, nobody actually enjoys qualifying explicitly.

I can see some corner cases where this might lead to a combinatorial
explosion of possible qualifications to try; however, I have a feeling
that such cases wouldn't really happen in practice. It is admittedly a
bit like a very restricted sort of ad-hoc polymorphism, but given that
the modules are compiled separately, I think the actual interactions
with the workings of the type system are not really very significant.
(It's just as if you tried each set of possible qualifications,
compiling the whole thing each time, and checked whether there's a
unique way to get the module to compile.)

This would mean that if we had, say, Data.List, Data.Map and Data.Set
imported, and there was an occurrence of insert that happened to be
applied to a couple of values and then something known to be a Map, it
would behave as if you'd written Data.Map.insert, because that's the
only thing which could possibly make sense. If there were ambiguity
about which insert you meant, it would still be an error, and you might
have to qualify it explicitly.

What do people think of this idea? Personally, it really annoys me
whenever I'm forced to give explicit module qualifications, and I think
this would really help. It would also subsume the
DisambiguateRecordFields extension rather handily.

- Cale
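For comparison, here is what the Data.Map situation looks like in today's Haskell, where the qualifier must be written out; `addUser` is an illustrative name, the rest comes from the standard libraries:

```haskell
import qualified Data.Map as Map
import Data.List (insert)  -- Data.List also exports an `insert`, hence the clash

-- Today we must write Map.insert explicitly. Under the proposed rule the
-- bare name `insert` would resolve to Map.insert here, since that is the
-- only imported `insert` whose type matches a Map argument.
addUser :: String -> Int -> Map.Map String Int -> Map.Map String Int
addUser = Map.insert
```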
Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution
2009/7/27 Jules Bean <ju...@jellybean.co.uk>:

> Cale Gibbard wrote:
>> What do people think of this idea? Personally, it really annoys me
>> whenever I'm forced to give explicit module qualifications, and I
>> think this would really help. It would also subsume the
>> DisambiguateRecordFields extension rather handily.
>
> A disadvantage - and this is not a No vote, just a remark - is that
> when trying to debug the expression:
>
>     foo bar baz quux
>
> if I type :t bar I will presumably get an ambiguity error, and I may
> have no easy way of working out *which* bar was actually intended in
> this line of code. I don't know how much of a burden this is, but it
> feels like a burden to writing/debugging/understanding code.
>
> Jules

There certainly do seem to be some cases where it would help the person
reading the code to qualify which module you meant, so clearly if it's
not very obvious which selection of modules produces the unique way to
get things to typecheck, that's not very good.

Perhaps there should at least be the restriction that there must exist
a chain of individual choices, with a unique possibility at each step.
This ensures that you never have to backtrack in deciding which modules
things are intended to come from. Of course, in cases where it's still
not obvious, it'd still be possible to make the qualification explicit.
The goal is to eliminate the need to explicitly qualify in the cases
where it's entirely obvious what the qualification should be.

- Cale
Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution
2009/7/27 Jason Dagit <da...@codersbase.com>:

> On Mon, Jul 27, 2009 at 9:29 AM, Cale Gibbard <cgibb...@gmail.com> wrote:
>> There was a great related idea on #haskell the other day: Make
>> explicit qualification unnecessary whenever there is a *unique*
>> choice of module qualifications from those imported which would make
>> the expression typecheck. Ambiguities would still need to be
>> qualified, but I feel that this would eliminate 99% of all ugly
>> qualified names from code. It would be especially good in the case of
>> infix operators, which, as far as I know, nobody actually enjoys
>> qualifying explicitly.
>
> My biggest fear is that of usability. If I understand you correctly,
> then as you change module imports you change the meaning of the code
> in potentially non-obvious ways. So this isn't too different from
> using unqualified imports and flipping between two modules that export
> the same function. Except that as you increase the 'automatic'ness of
> it, it has the potential to trip up people.

Well, yes, but only insofar as you can already cause that to happen.
Simply adding a new module import might force you to qualify some
names, as you mention below (and as it can already force you to do),
but it will never cause the meaning to otherwise change, since it's not
doing something like picking the first module which works (which would
really be bad). It's only fixing the qualification in places where
there's *exactly one* qualification that would work.

Similarly, removing a module import will not cause the meaning to
change, supposing that the module still compiles; and if you can do
that, you can still be certain that nothing from that module was being
used. Of course, put the two together in one step and you can change
semantics arbitrarily, but that's already the case, and I don't think
that's necessarily something to be avoided.

> I think the worse case is: you add a module import and suddenly you
> have an error in code that was previously compiling. Suddenly the
> auto-disambiguation stops working in a chunk of code that was fine
> before. When working with code that others have written this would
> confuse me. I think I would then start using qualified imports
> everywhere just to work around this feature :)

You can already import a module and suddenly need to disambiguate
expressions where a name gets used. This just relieves you of that
responsibility sometimes. Consider the case where you have a module
with a bunch of uses of stuff from Data.List and the Prelude, and you
decide that you need Data.Map for a new function. If you import
Data.Map unqualified, all of a sudden you need to qualify half of the
Prelude and Data.List stuff or it would be ambiguous. (Or import
Data.Map qualified.)

> Yes, I realize it would be an extension, but it would be an extension
> that I suspect I would try to avoid.
>
> I like explicit module qualifications, more or less. I certainly don't
> mind reading them, and I don't care if it takes 2 seconds longer to
> type something. You asked for my thoughts and there they are :)

Well, thanks :) I do think some thought has to be put into limiting the
power of it, so that we don't end up with situations where only a
computer could figure out what's going on.

- Cale
Re: [Haskell-cafe] nubBy seems broken in recent GHCs
2009/6/6 Bertram Felgenhauer <bertram.felgenha...@googlemail.com>:

> Interesting. This was changed in response to
> http://hackage.haskell.org/trac/ghc/ticket/2528
>
> | Tue Sep 2 11:29:50 CEST 2008 Simon Marlow <marlo...@gmail.com>
> | * #2528: reverse the order of args to (==) in nubBy to match nub
> | This only makes a difference when the (==) definition is not
> | reflexive, but strictly speaking it does violate the report
> | definition of nubBy, so we should fix it.
>
> It turns out that 'elem' differs from the report version and should
> have its comparison reversed. Of course that would only ever matter
> for broken Eq instances. However, the report also states that the
> nubBy function may assume that the given predicate defines an
> equivalence relation. http://haskell.org/onlinereport/list.html#sect17.6
>
> So I'm not sure there's anything to be fixed here - although backing
> out the above patch probably won't hurt anybody.
>
> Bertram

Yeah, while most Eq instances really do define an equivalence relation
(at least extensionally), and the Report authors probably thought in
terms of an equivalence relation (even going so far as to say that
nubBy may assume its parameter is one, which I think was a mistake), I
think nubBy has a much more general use. It does a generalised kind of
sieving. Specifically, nubBy f xs is the unique subsequence of xs such
that:

1) If x and y are elements of it such that x occurs before y, then
   f x y is False.
2) The sequence of indices of selected elements is lexicographically
   minimal among all subsequences satisfying condition 1. (That is, it
   always picks the earliest elements possible.)

I think this is how it ought to be specified.
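A minimal sketch of nubBy as the Report defines it (the prime on the name is only to avoid clashing with Data.List.nubBy); with a non-symmetric relation it performs exactly the sieving described above:

```haskell
-- The Report's definition: keep x, then drop every later y for which
-- f x y holds, and recurse on what remains.
nubBy' :: (a -> a -> Bool) -> [a] -> [a]
nubBy' _ []     = []
nubBy' f (x:xs) = x : nubBy' f (filter (\y -> not (f x y)) xs)
```

For instance, `nubBy' (\x y -> y `mod` x == 0) [2..20]` sieves out the primes up to 20, a use no equivalence relation could express.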
Similarly, groupBy f xs is (and should be) the unique list of
contiguous sublists of xs such that:

1) concat (groupBy f xs) = xs
2) If x is the head of any of the sublists and y is any other element
   of that sublist, then f x y holds.
3) The sequence of lengths of the sublists is lexicographically maximal
   among all lists satisfying the first two properties. (That is, it
   always prefers adding elements to an earlier group over starting a
   new group.)

- Cale
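Data.List.groupBy already behaves this way, which shows up clearly with a non-equivalence relation (a small illustrative check):

```haskell
import Data.List (groupBy)

-- Each element is compared with the *head* of the current group, not with
-- its neighbour: 1 <= 2, 1 <= 3 and 1 <= 1 all hold, so the second 1 stays
-- in the first group even though it is smaller than the 3 preceding it.
runsFromHead :: [[Int]]
runsFromHead = groupBy (<=) [1, 2, 3, 1, 0, 2]
```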
[Haskell-cafe] nubBy seems broken in recent GHCs
According to the Report:

    nubBy           :: (a -> a -> Bool) -> [a] -> [a]
    nubBy eq []     = []
    nubBy eq (x:xs) = x : nubBy eq (filter (\y -> not (eq x y)) xs)

Hence, we should have that

    nubBy (<) (1:2:[])
      = 1 : nubBy (<) (filter (\y -> not (1 < y)) (2:[]))
      = 1 : nubBy (<) []
      = 1 : []

However, in ghc-6.10.3:

    Prelude Data.List> nubBy (<) [1,2]
    [1,2]

The order of the parameters to the function which is passed to nubBy is
*important*, since the function might not be an equivalence relation.
nubBy is more generally useful for sieving even when the relation is
not symmetric. groupBy, for a similar reason, has applications for
grouping beyond those provided by equivalence relations, and I think we
should be able to rely on its behaviour. Moreover, I contend that the
Report's order is more sensible, since the parameters to the relation
stay in the left-to-right order in which they occurred in the list. It
would be good if this could get fixed for 6.10.4.

- Cale

PS: There is actually another implementation of groupBy which would
*also* be useful to have, which is exactly the same as groupBy for
equivalence relations, but compares adjacent elements rather than
comparing the first element of a group with successive ones.
(groupByAdjacent or something would be a reasonable name.) An
implementation of it can be found here:
http://hpaste.org/fastcgi/hpaste.fcgi/view?id=5595
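The hpaste link above has since rotted; a sketch of the adjacent-comparison variant it described might look like this (the implementation is a reconstruction, not the original paste):

```haskell
-- Group by comparing each element with its immediate neighbour, rather
-- than with the head of the current group. For equivalence relations
-- this agrees with Data.List.groupBy.
groupByAdjacent :: (a -> a -> Bool) -> [a] -> [[a]]
groupByAdjacent f = foldr step []
  where
    step x []                = [[x]]
    step x acc@((y:ys):rest)
      | f x y                = (x : y : ys) : rest
      | otherwise            = [x] : acc
    step x ([] : rest)       = [x] : rest  -- unreachable; for totality
```

With `(\a b -> b - a == 1)` this groups runs of consecutive numbers, something the head-comparing groupBy cannot do.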
Re: [Haskell-cafe] Buttons and Clicks - State Monad
You might be misunderstanding the purpose of the State Int monad
somewhat. A computation of type State Int a is internally represented
by a function of type Int -> (Int, a). When you call runState, you
effectively apply this pure function to an initial state, and get a
final state and result. You won't be able to do anything with the State
monad that you couldn't already do with such a function; it's more or
less a notational convenience.

In particular, the state of your counter will not be preserved between
calls to runState unless you arrange for the final state returned from
the last call to runState to be passed along as the initial state in
the next one. Of course, this effectively defeats the purpose of using
the State monad in the first place. Since event handlers in wxHaskell
must be in the IO monad, there's no machinery in place to handle
forwarding your state along, so the State monad is not terribly useful
here.

On the other hand, it's rather easy to write a function of type
State s a -> IORef s -> IO a which takes the initial state from the
IORef and updates the IORef with the new state.

- Cale

2009/1/31 guenni68 <red...@fedoms.com>:

> Hi, in this piece here
> http://moonpatio.com/fastcgi/hpaste.fcgi/view?id=1083#a1083
> I'm trying to create a button that, every time it is clicked,
> increases a counter by one and does a putStrLn of the counter's
> current value. I'm trying to write this without any use of IORef,
> merely using the state monad. Can anybody show me how to do this?
>
> Günther
>
> The UI code is wxHaskell.
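A sketch of that bridging function (the name runStateIORef is mine; note that mtl's runState returns the pair as (result, state)):

```haskell
import Control.Monad.State (State, runState, modify, get)
import Data.IORef (IORef, newIORef, readIORef, writeIORef)

-- Run a State computation against the contents of an IORef: read the
-- current state, run the pure computation, write the new state back.
runStateIORef :: State s a -> IORef s -> IO a
runStateIORef m ref = do
  s <- readIORef ref
  let (a, s') = runState m s
  writeIORef ref s'
  return a
```

An event handler could then be, say, `runStateIORef (modify (+1) >> get) counterRef >>= print`, keeping the counter logic itself pure.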
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
> ... off the top of their head what the difference between an
> epimorphism and a hylomorphism is?

They're not even from the same branch of mathematics.

Epimorphisms are defined in category theory, as arrows which can be
cancelled when they appear on the right of a composite; that is, if f
is an epimorphism and g . f = h . f, then g = h. Such arrows are
somewhat comparable to surjective functions.

Hylomorphisms are from recursion theory. They are the composite of an
anamorphism (which builds up a recursive structure from an initial
seed) with a catamorphism (which replaces the constructors in that
recursive structure with other functions).

Terminology has value, in that it allows you to see things in a new way
which is clearer than what could otherwise be achieved. Any programmer
worth their salt should be comfortable absorbing a new term, the same
way they learn a new library function.

We should remember that Haskell's beauty is not an accident. It is
proportional to the amount of effort which went into building the solid
mathematical foundations describing its semantics, and designing a
language which reflected those semantics as clearly as possible.
Discarding those foundations in an attempt to get more users is a price
I would personally never want to see us pay.

- Cale
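To make the hylomorphism definition concrete, here is a small sketch where the intermediate recursive structure is an ordinary list (the function names are illustrative):

```haskell
-- Anamorphism: unfold a seed into the list [n, n-1 .. 1].
countdown :: Int -> [Int]
countdown 0 = []
countdown n = n : countdown (n - 1)

-- Catamorphism: fold the list down, replacing (:) with (*) and [] with 1.
product' :: [Int] -> Int
product' = foldr (*) 1

-- Hylomorphism: the composite of the two. Thanks to laziness the
-- intermediate list need never be fully materialised.
factorial :: Int -> Int
factorial = product' . countdown
```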
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
2009/1/15 Sittampalam, Ganesh <ganesh.sittampa...@credit-suisse.com>:

> Lennart Augustsson wrote:
>> I think the documentation should be reasonably newbie-friendly too.
>> But that doesn't mean we should call Monoid Appendable. Appendable is
>> just misleading, since Monoid is more general than appending.
>
> Then why does it have a member named 'mappend'? :-)
>
> Ganesh

Good question. The names of the methods of the Monoid class are
inappropriate. My personal preference would be:

    class Monoid m where
        zero :: m
        (++) :: m -> m -> m

(in the Prelude, of course)

- Cale
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
> If you're learning Haskell, which communicates the idea more clearly:
>
> * Appendable, or
> * Monoid?
>
> I can immediately figure out what the first one means. With the
> second, I could refer to the GHC documentation, which does not
> describe what a Monoid does. Or read a Wikipedia article about a
> branch of mathematics and try to figure out how it applies to Haskell.

However, Appendable carries baggage with it which is highly misleading.
Consider, for instance, the monoid of rational numbers under
multiplication (which, by the way, is quite useful with the
writer-transformed list monad for dealing with probabilities) -- you
can claim that multiplication here is a sort of appending, perhaps, but
it's not really appropriate. Modular addition, or multiplication in
some group, is even farther from it.
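The probability example mentioned above can be sketched as follows (written against a current GHC, where Monoid requires a Semigroup instance; the names Prob, Dist and coin are illustrative):

```haskell
import Control.Monad.Writer (WriterT(..))
import Data.Ratio (Ratio, (%))

-- Rational numbers under multiplication: mempty is 1, (<>) is (*).
newtype Prob = Prob (Ratio Integer) deriving (Eq, Show)

instance Semigroup Prob where
  Prob a <> Prob b = Prob (a * b)

instance Monoid Prob where
  mempty = Prob 1

-- The writer-transformed list monad: each branch carries a probability,
-- and probabilities along a path are combined by the monoid operation.
type Dist = WriterT Prob []

coin :: Dist Bool
coin = WriterT [(False, Prob (1 % 2)), (True, Prob (1 % 2))]

-- Four equally likely outcomes; only True && True lands both heads.
twoHeads :: [(Bool, Prob)]
twoHeads = runWriterT ((&&) <$> coin <*> coin)
```

Nothing about this monoid "appends" in any intuitive sense, which is the point being made.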
Re: [Haskell-cafe] Re: Comments from OCaml Hacker Brian Hurt
Yes! The library documentation tree has a way of making everything seem
equally important, when that is not the case. This is why we need
well-crafted tutorials and books.

2009/1/15 David Fox <dds...@gmail.com>:

> Monoid isn't something I came across and didn't understand; it's
> something I should have been using for a long time before I discovered
> it. But it never jumped out at me when I was browsing the library
> documentation tree.
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
While you're absolutely correct, and I agree with you, to be fair,
essentially all mathematicians have a sense of rigourisability (whether
they recognise it or not), which is a peculiar standard that they apply
to everything they hear or read. The level of rigour at which
mathematicians communicate is designed not to bore the listener with
details that they could easily supply for themselves, being an
intelligent mathematician and not a mechanical abstraction.

- Cale

2009/1/15 Derek Elkins <derek.a.elk...@gmail.com>:

> Actually, programming requires -far more- precision than mathematics
> ever has. The standards of formal and precise that mathematicians use
> are a joke to computer scientists and programmers. Communication is
> also more important, or at least more centre stage, in mathematics
> than in programming. Mathematical proofs are solely about
> communicating understanding and are not required to execute on a
> machine.
>
> On Thu, 2009-01-15 at 18:27, Lennart Augustsson wrote:
>> That's very true. But programming is one discipline where
>> mathematical precision is needed, even if you want to call it
>> something else.
>>
>> On Thu, Jan 15, 2009 at 6:04 PM, Paul Moore <p.f.mo...@gmail.com> wrote:
>>> Mathematical precision isn't appropriate in all disciplines.
Re: [Haskell-cafe] Comments from OCaml Hacker Brian Hurt
2009/1/15 Andrew Coppin <andrewcop...@btinternet.com>:

> OK, well then my next question would be: in what way is defining
> configuration files as a monoid superior to, uh, not defining them as
> a monoid? What does it allow you to do that you couldn't otherwise?
> I'm not seeing any obvious advantage, but you presumably did this for
> a reason...

I can't speak from the perspective of the Cabal developers, but
combining configurations with partial information using a monoid
operation is generally a good way to structure things. Basically, this
would be analogous to the way that the First monoid (or the Last
monoid) works, but across a number of fields. You have an empty or
default configuration, which specifies nothing and serves as the
identity, and then a way of layering choices together, which is the
monoid operation.

- Cale
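A minimal sketch of that layering pattern (the field names are illustrative, not Cabal's actual configuration type):

```haskell
import Data.Monoid (First(..))

-- Each field combines with "leftmost Just wins", i.e. the First monoid.
data Config = Config
  { verbosity :: First Int
  , outputDir :: First FilePath
  } deriving Show

instance Semigroup Config where
  a <> b = Config (verbosity a <> verbosity b) (outputDir a <> outputDir b)

instance Monoid Config where
  mempty = Config mempty mempty  -- the empty configuration is the identity

-- Layering: settings on the left take priority over those on the right.
cmdLine, defaults :: Config
cmdLine  = Config (First Nothing)  (First (Just "dist"))
defaults = Config (First (Just 1)) (First Nothing)
```

`cmdLine <> defaults` then picks verbosity 1 from the defaults but the command line's output directory, with no per-field case analysis anywhere.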
Re: [Haskell-cafe] view patterns
2008/11/5 Cetin Sert <[EMAIL PROTECTED]>:

>     interactive:1:4:
>         Warning: Pattern match(es) are overlapped
>         In the definition of `emp':
>             emp (has -> True)  = ...
>             emp (has -> False) = ...
>
> Why do I get this warning in ghci, or when I try to compile a file
> with view patterns? (using -fglasgow-exts and -XViewPatterns,
> ghc 6.10.1)

This is a bug which appears to be known about:
http://hackage.haskell.org/trac/ghc/ticket/2395

- Cale
Re: [Haskell-cafe] OT: Haskell desktop wallpaper?
Hello! I worked on this for a few minutes in the GIMP. Thought it might
look nice spraypainted on asphalt.

http://cale.yi.org/LambdaAsphalt.jpg (this one's at 1280x1024)

- Cale

2008/10/8 R. Emre Başar <[EMAIL PROTECTED]>:

> Hi, I created a wallpaper from The.Monad.Reader logo. You can find two
> versions here:
>
> http://tonguc.name/images/lambda-1280x800.png
> http://tonguc.name/images/lambda-1024x768.png
>
> Magnus Therning writes:
>
>> This morning I got tired of my desktop wallpaper (one that ships with
>> Debian's Gnome packages). Typing "haskell desktop wallpaper" yielded
>> a lot of links to wallpapers with Colleen Haskell; while she's a
>> beautiful lady, it wasn't exactly what I was hoping to find. Hence
>> this email. Where can I find some nice wallpapers inspired by
>> Haskell, or maybe even created by Haskell code? Oh yes, wallpapers
>> related to XMonad would do, I suppose ;-)
>>
>> /M
>>
>> --
>> Magnus Therning (OpenPGP: 0xAB4DFBA4)
>> magnus@therning.org  Jabber: magnus@therning.org
>> http://therning.org/magnus  identi.ca|twitter: magthe
>
> --
> R. Emre Başar
> İstanbul Bilgi University
> Department of Computer Science
Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)
2008/10/6 Don Stewart <[EMAIL PROTECTED]>:

> dagit:
>> data and newtype vary in one more subtle way, and that's how/when
>> they evaluate to bottom. Most of the time they behave identically,
>> but in the right cases they act slightly differently. newtype is
>> usually regarded as more efficient than data. This is because the
>> compiler can choose to optimize away the newtype so that it only
>> exists at type check time. I think this is also possible with data in
>> some, but not all, uses.
>
> The compiler *must* optimise away the use. They're sort of 'virtual'
> data, guaranteed to have no runtime cost.

I'm not sure that I'd want to be that emphatic about what an
implementation *must* do regarding something so operational. The
informal semantics of pattern matching in the Report says:

    Matching the pattern con pat against a value, where con is a
    constructor defined by newtype, depends on the value:

    * If the value is of the form con v, then pat is matched against v.
    * If the value is _|_, then pat is matched against _|_.

    That is, constructors associated with newtype serve only to change
    the type of a value.

This clearly has an implementation which introduces no overhead, which
one can expect from good implementations of the language. There are
obviously implementations of these semantics which do introduce
overhead as well, though, so I would be hesitant to make any
requirement like that. We can say, however, that newtypes have no
additional runtime cost in GHC, regardless of the optimisation level
you pick.

- Cale
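The semantic difference quoted from the Report can be observed directly; a small sketch (the type and function names are illustrative):

```haskell
newtype N = N Int
data    D = D Int

-- Matching a newtype constructor does not force the value: the pattern
-- (N _) matches even when the argument is bottom.
fromN :: N -> Bool
fromN (N _) = True

-- Matching a data constructor forces the value to weak head normal
-- form: applying fromD to bottom is itself bottom.
fromD :: D -> Bool
fromD (D _) = True
```

So `fromN undefined` evaluates to True, while `fromD undefined` throws.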
Re: [Haskell-cafe] Control.Parallel.Strategies
Out of interest, have you tried using parListChunk to break the work
into larger pieces? It seems rather unlikely to help in this case, as
the parts are already pretty large, but perhaps it's worth a shot.
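For reference, a minimal sketch of what using parListChunk looks like (it comes from the parallel package; the chunk size of 100 and the workload here are illustrative guesses, not taken from the thread):

```haskell
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Evaluate the list in parallel, one spark per chunk of 100 elements
-- rather than one spark per element, reducing scheduling overhead.
results :: [Int] -> [Int]
results xs = map expensive xs `using` parListChunk 100 rdeepseq
  where expensive n = sum [1 .. n]  -- stand-in for real work
```

Run with `+RTS -N` to actually use multiple cores; without it the strategy is a no-op apart from the sparks.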
Re: [Haskell-cafe] Beginners arrow question
On 05/04/2008, Paul Johnson <[EMAIL PROTECTED]> wrote:

>     myProblem :: (ArrowXml a) => a XmlTree String
>     myProblem = proc xml -> do
>         name <- getAttrValue "name" -< xml
>         fmt  <- arr lookupFormatter -< name
>         fmt -< xml

GHC has a special syntax for using ArrowApply (which HXT is an instance
of). Whenever the expression to the left of -< needs to involve a local
variable, you can replace -< with -<< and it should work. To understand
better what it means, you can read
http://www.haskell.org/ghc/docs/latest/html/users_guide/arrow-notation.html
-- it's basically just a shorthand for using 'app'.

    myProblem :: (ArrowXml a) => a XmlTree String
    myProblem = proc xml -> do
        name <- getAttrValue "name" -< xml
        fmt  <- arr lookupFormatter -< name
        fmt -<< xml

Try that and see how it goes.

- Cale
Re: [Haskell-cafe] Ackermann Function Memoization, GHC Weird Output or Bug?
Here's the bug:

    {-# INLINE safeIndex #-}
    safeIndex :: Ix i => (i, i) -> Int -> i -> Int
    safeIndex (l,u) n i = let i' = unsafeIndex (l,u) i
                          in if (0 <= i') && (i' < n)
                                then i'
                                else error "Error in array index"

unsafeIndex here is just a function which transforms indices into Int
indices into the flat array, and does no checking of validity. Then
safeIndex simply checks whether the result is nonnegative and less than
the size of the array. Whoops! The actual test to see whether the index
was within the array bounds in the first place never gets performed!

- Cale

On 14/03/2008, Eric Mertens <[EMAIL PROTECTED]> wrote:

> Smaller example of this behavior:
>
>     > array ((0,0),(1,1)) [((1,1),6)] ! (0,3)
>     6
>
> --
> Eric Mertens
Re: [Haskell-cafe] Doubting Haskell
On 04/03/2008, Luke Palmer <[EMAIL PROTECTED]> wrote:

> On Tue, Mar 4, 2008 at 4:16 AM, Ketil Malde <[EMAIL PROTECTED]> wrote:
>> Paul Johnson <[EMAIL PROTECTED]> writes:
>>> I'm surprised you found the significant whitespace difficult.
>>
>> I wonder if this has something to do with the editor one uses? I use
>> Emacs, and just keep hitting TAB, cycling through possible
>> alignments, until things align sensibly. I haven't really tried, but
>> I can imagine lining things up manually would be more painful,
>> especially if mixing tabs and spaces.
>
> Especially if mixing tabs and spaces, indeed. Haskell does the Python
> thing of assuming that a tab is 8 spaces, which IMO is a mistake. The
> sensible thing to do if you have a whitespace-sensitive language that
> accepts both spaces and tabs is to make them incomparable to each
> other; i.e. [snip]

I honestly think that tab characters occurring anywhere but in a
comment should be considered a lexical error and rejected by the
compiler outright. More problems are caused by trying to continue with
only tabs, or some mixture of tabs and spaces, than by just getting
one's editor to expand tabs automatically.

- Cale
Re: [Haskell-cafe] haskellwiki and Project Euler
Hello, It seems that I'm getting sucked into this argument solely due to my unwillingness to allow people to damage useful content that has been added to the Haskell wiki. This started a couple of weeks ago when a user by the name Marypoppins decided to arbitrarily remove all the Euler Problems solutions from the wiki. I treated this as vandalism and immediately reverted all the changes. I'd like to state up front that I otherwise have no personal stake in this, since the solutions pages are not ones that I've made significant contributions to, nor have I even spent a significant amount of time working on Project Euler problems. (They don't have enough universal quantifiers in them for my tastes.) I do, however, think it's important not to allow valid contributions to the wiki to be damaged by people without good reason. On 23/02/2008, Daniel Fischer [EMAIL PROTECTED] wrote: Hi all, I try not to be too rude, although I'm rather disgusted. I know there are several sites out on the web where solutions to PE problems are given. That is of course absolutely against the sporting spirit of Project Euler, but hey, not all people are sporting. I've found http://www.haskell.org/haskellwiki/Euler_problems irritating for a while, but wasn't overly annoyed by it while it only contained code for solving a few dozen problems. Today I learnt that it now contains code for all problems. Really bad! Why is this even the least bit bad? If you publish a bunch of problems, expect people to publish a bunch of solutions to them. They will do this regardless of what you demand, since there's educational value to others in doing so. If you're running a contest and you don't want people to be able to look up all the solutions, then simply produce a bunch of problems to which nobody has the solution, and make them available all at once, with a time limit on solving them. If you want to see how this is done correctly, have a look at what the ICFP does.
If Project Euler is instead not a contest, as people on the Talk pages on the wiki have claimed, then nobody should have any problem with publishing solutions, as the only person one could possibly cheat by looking up the solution is oneself. However, if one had already given up on solving said problem, then there would likely be significant educational value in reading a solution to it. On top of that, the code for many problems isn't even Haskell, but C, WTF! This indeed is a problem, as it is the Haskell wiki after all. However, I feel that it's more valuable to keep such solutions until such time as their Haskell counterparts are made available. Other code was submitted without consent of the author, copied from the PE fora, which are restricted access and so, even if perhaps not legally, then in spirit, do not fall under the legitimate resources for haskellwiki: "You are also promising us that you wrote this yourself, or copied it from a public domain or similar free resource. DO NOT SUBMIT COPYRIGHTED WORK WITHOUT PERMISSION!" This is a legitimate concern. If the copyright of the original authors can be proved, said solutions should indeed be removed. However, any claim that the content as a whole, or the list of numeric solutions, violates the copyright of PE is clearly ridiculous. The problem statements do not appear on the wiki, and the exact solutions, even if PE were to publish them (that list doesn't appear to be anywhere on the PE site), clearly qualify as fair use. To make matters worse still, there was a page containing nothing but the answers. That was changed, but Cale chose to reintroduce that crap. I just removed it again. Your turn, Cale. I will not tolerate people coming along and arbitrarily blanking pages for inappropriate reasons like this. Sorry. Such a list of solutions would be useful to someone working on the problems, as a fast way to check their solutions, for instance.
It doesn't harm people wanting to solve the problems on their own, as they can simply avoid looking at it. I call on the Haskell community to vote for immediate removal of these pages from the wiki! Show that you're a sporting bunch. I call for the opposite. Sorry. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] haskellwiki and Project Euler
On 24/02/2008, Daniel Fischer [EMAIL PROTECTED] wrote: On Sunday, 24 February 2008 at 11:37, Cale Gibbard wrote: This is a legitimate concern. If the copyright of the original authors can be proved, said solutions should indeed be removed. PE has a share-alike license; the very least to be demanded if someone posts others' code is proper attribution. To clarify, everything on the wiki is implicitly published under a simple permissive license which is available here: http://haskell.org/haskellwiki/HaskellWiki:Copyrights If code cannot be made available under that license, it indeed should be removed. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] haskellwiki and Project Euler
On 24/02/2008, Chaddaï Fouché [EMAIL PROTECTED] wrote: 2008/2/24, Rodrigo Queiro [EMAIL PROTECTED]: That said, I vote to keep the solutions (providing they are written by the page editor) since IMO they do no harm. I agree with this (I personally contributed some small solutions and tried to at least comment them a minimum), but I noticed a smartass replaced valid solutions by references to the sequences dictionary... As the goal of PE is to solve the questions by program (at least in my understanding), I feel this qualifies as vandalism... especially as the replaced programs, whatever their value, were in Haskell, whereas the sequence dictionary has no relation to Haskell. I encourage you to put your solutions back up, that would be good. Referencing OEIS is a bit of a cheesy way to do things. (Though if it's going to be done, one could at least make use of the excellent Math.OEIS library :) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Graphical graph reduction
Is there anywhere we can download the HOPS program itself? It unfortunately doesn't seem to be available from that page. On 24/02/2008, Jacques Carette [EMAIL PROTECTED] wrote: I think HOPS is what you are looking for: http://www.cas.mcmaster.ca/~kahl/HOPS/ It may advertise itself otherwise, but the main thing you 'see' when running programs in fully explicit mode is exactly all the graph reductions. Jacques ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Doubting Haskell
(I'm copying the list on this, since my reply contains a tutorial which might be of use to other beginners.) On 19/02/2008, Alan Carter [EMAIL PROTECTED] wrote: Hi Cale, On Feb 19, 2008 3:48 PM, Cale Gibbard [EMAIL PROTECTED] wrote: Just checking up, since you haven't replied on the list. Was my information useful? Did I miss any questions you might have had? If you'd like, I posted some examples of using catch here: Thanks for your enquiry! My experiment continues. I did put a progress report on the list - your examples together with a similar long and short pair got me over the file opening problem, and taught me some things about active whitespace :-) I couldn't get withFile working (says out of scope, maybe 'cos I'm ghc 6.6 on my Mac) Make sure to put:

    import System.IO

at the top of your source file, if you haven't been. This should import everything documented here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html but it turned out the line I was looking for (collapsed from the examples) was:

    text <- readFile "data.txt" `catch` \_ -> return ""

This ensures the program never loses control, crashing or becoming unpredictable by attempting to use an invalid resource, by yielding an empty String if for any reason the file read fails. Then an empty String makes it very quickly through parsing. I guess that's quite functiony :-) Amazing how easy once I knew how. Even stranger that I couldn't find a bread and butter example of it. Then I was going very quickly for a while. My file is dumped from a WordPress MySql table. Well formed lines have 4 tab separated fields (I'm using pipes for tabs here):

    line id | record id | property | value

Line IDs are unique and don't matter. All lines with the same record ID give a value to a property in the same record, similar to this:

    1|1|name|arthur
    2|1|quest|seek holy grail
    3|1|colour|blue
    4|2|name|robin
    5|2|quest|run away
    6|2|colour|yellow

Organizing that was a joy.
It took minutes:

    let cutUp = tail (filter (\fields -> (length fields) == 4)
                             (map (\x -> split x '\t') (lines text)))

This should almost certainly be a function of text:

    cutUp text = tail (filter (\fields -> (length fields) == 4)
                              (map (\x -> split x '\t') (lines text)))

I found a split on someone's blog (looking for a library tokenizer), but I can understand it just fine. I even get to chuck out ill-formed lines and remove the very first (which contains MySql column names) on the way through! Sadly, there's no general library function for doing this. We have words and lines (and words would work here, if your fields never have spaces), but nobody's bothered to put anything more general for simple splitting into the base libraries (though I'm sure there's plenty on hackage -- MissingH has a Data.String.Utils module which contains split and a bunch of others, for example). However, for anything more complicated, there are also libraries like Parsec, which are generally really effective, so I highly recommend looking at that at some point. I then made a record to put things in, and wrote some lines to play with it (these are the real property names):

    data Entry = Entry
        { occupation                :: String
        , iEnjoyMyJob               :: Int
        , myJobIsWellDefined        :: Int
        , myCoworkersAreCooperative :: Int
        , myWorkplaceIsStressful    :: Int
        , myJobIsStressful          :: Int
        , moraleIsGoodWhereIWork    :: Int
        , iGetFrustratedAtWork      :: Int
        }
    ...
    let e = Entry { occupation = "", iEnjoyMyJob = 0 }
    let f = e { occupation = "alan" }
    let g = f { iEnjoyMyJob = 47 }
    putStrLn ((occupation g) ++ " " ++ (show (iEnjoyMyJob g)))

Then I ran into another quagmire. I think I have to use Data.Map to build a collection of records keyed by record id, and fill them in by working through the list of 4 item lists called cutUp. As with the file opening problem I can find a few examples that convert a list of tuples to a Data.Map, one to one.
I found a very complex example that convinced me a map from Int to a record is possible, but gave me no understanding of how to do it. I spent a while trying to use foldl before I decided it can't be appropriate (I need to pass more values). So I tried a couple of recursive functions, something like:

    type Entries = M.Map Int Entry
    ...
    let entries = loadEntries cutUp
    ...
    loadEntries :: [[String]] -> Entries
    loadEntries [] = M.empty
    loadEntries [x : xs] = loadEntry (loadEntries xs) x

-- Possible common beginner error here: [x:xs] means the list with one element, which is a list whose first element is x and whose tail is xs. Your type signature and the type of cutUp seem to confirm that this is the right type, but you don't seem to have a case to handle a longer list of lists. If you want just a list with first entry x, and with tail xs, that's just (x:xs). If you want to handle lists
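For what it's worth, here is one hedged sketch of how the whole pipeline could fit together with a fold over Data.Map.insertWith, using a cut-down two-field Entry and a hypothetical setField helper (neither of which is from the original thread), plus a minimal split in the argument order used above:

```haskell
import qualified Data.Map as M
import Data.List (foldl')

-- Split on a single delimiter character: split "a\tb" '\t' == ["a","b"]
split :: String -> Char -> [String]
split s c = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : split rest c

-- A simplified Entry with just two of the properties.
data Entry = Entry { occupation :: String, iEnjoyMyJob :: Int }
  deriving (Eq, Show)

emptyEntry :: Entry
emptyEntry = Entry { occupation = "", iEnjoyMyJob = 0 }

-- Hypothetical helper: interpret one property/value pair.
setField :: String -> String -> Entry -> Entry
setField "occupation"  v e = e { occupation = v }
setField "iEnjoyMyJob" v e = e { iEnjoyMyJob = read v }
setField _             _ e = e

-- Fold the well-formed rows into a Map keyed on record id; insertWith
-- merges into an existing record if the key has been seen before.
loadEntries :: [[String]] -> M.Map Int Entry
loadEntries = foldl' step M.empty
  where
    step m [_, rid, prop, val] =
      M.insertWith (\_new old -> setField prop val old)
                   (read rid) (setField prop val emptyEntry) m
    step m _ = m
```

With something along these lines, `loadEntries (cutUp text)` would give the Entries map directly, with no explicit recursion and no extra values to thread through.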
Re: [Haskell-cafe] Doubting Haskell
On 16/02/2008, Alan Carter [EMAIL PROTECTED] wrote: Greetings Haskellers, I'm still hoping that this is solvable. That the instinctive priorities of production programmers are just different to those of researchers, and in fact it *is* possible to open a file *and* read it, checking *both* error returns, without being driven insane. If so, I sincerely suggest an example or two, like the small but well formed programs in K&R, Stroustrup or Gosling saying things like:

    if((fp = fopen(...)) != NULL)
    {
        if(fgets(...) != NULL)
        {
            printf(...);
        }
        fclose(...);
    }

Best wishes - and still hoping I'm wrong after all Alan Carter Well, first of all, have you read the documentation for System.IO? http://www.haskell.org/ghc/docs/latest/html/libraries/base/System-IO.html That has all the corresponding functions you need. I'm not sure I understand completely how you managed to spend two weeks struggling with this before asking. Two minutes on #haskell, or a quick question about how to open and read a file should have got you a useful response. :) First, I'll write the program in a straightforward but extremely explicit manner, handling possible errors and managing cleanup explicitly. This code is rather verbose, so I'll then show some other less verbose ways to handle things while still maintaining safety. So, the first version:

    import System.IO
    import Control.Exception (try)

    main = do mfh <- try (openFile "myFile" ReadMode)
              case mfh of
                Left err -> do putStr "Error opening file for reading: "
                               print err
                Right fh -> do mline <- try (hGetLine fh)
                               case mline of
                                 Left err -> do putStr "Error reading line: "
                                                print err
                                                hClose fh
                                 Right line -> putStrLn ("Read: " ++ line)

Okay, so this is hopefully fairly self-explanatory to a C programmer. The only potentially confusing part is the function 'try', imported from Control.Exception. What it does is to catch all possible exceptions, and reflect them through the return value of the action.
If an exception is thrown, 'try' will catch it, and give us a value of the form (Left e), for e being the exception. If instead the operation succeeds without an exception, we get a value (Right x), where x is the normal return value of the action. The successive 'case' expressions are used to pattern match on this, and handle the errors by printing out an explanatory message. Some example runs of this program:

    [EMAIL PROTECTED]:~$ rm myFile
    [EMAIL PROTECTED]:~$ ./read
    Error opening file for reading: myFile: openFile: does not exist (No such file or directory)
    [EMAIL PROTECTED]:~$ touch myFile
    [EMAIL PROTECTED]:~$ ./read
    Error reading line: myFile: hGetLine: end of file
    [EMAIL PROTECTED]:~$ echo hello > myFile
    [EMAIL PROTECTED]:~$ ./read
    Read: hello

This program actually does more error handling than your example C program, so let's tone it down a bit, and make use of some nice IO operations provided to handle errors and clean things up safely in the event of a failure.

    import System.IO

    main = withFile "myFile" ReadMode $ \fh -> do
             line <- hGetLine fh
             putStrLn ("Read: " ++ line)

The function 'withFile' takes a filename, a mode in which to open the file, and a function, taking a filehandle and giving an action to be performed with that handle, and wraps that action up inside an exception handler, which ensures that the file handle is safely closed if an exception is thrown. (This doesn't matter much in our small example, but I'm sure you'll appreciate how that's an important thing.) We don't handle the exceptions explicitly in this program, but we still could.
There are a host of exception-handling mechanisms in Control.Exception, ranging from simple value-oriented things like try, to more explicit operations for wrapping things in exception handlers, like catch:

    catch :: IO a -> (Exception -> IO a) -> IO a

Or to get more selective:

    catchJust :: (Exception -> Maybe b) -> IO a -> (b -> IO a) -> IO a

Which takes a function that gets to decide whether to handle the exception, and at the same time, transform the exception in some way before passing it on to the exception handler. For more information about exceptions, check out the documentation for Control.Exception here: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html I assure you that Haskell is a very reasonable programming language in which to write safe and correct programs. There are whole companies founded on writing high-assurance software in Haskell. If you have more questions, I would be happy to answer them, either here, or perhaps more comfortably, on IRC, #haskell on irc.freenode.net. It's a very beginner-friendly channel, and asking questions there is a great way to learn the language quickly, and find the resources you need. Hope this
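As a concrete illustration of the selective style: base's exception types have since been reworked (extensible exceptions), so a sketch that compiles today pairs the modern catchJust with System.IO.Error.isDoesNotExistError rather than using the monolithic Exception type quoted above:

```haskell
import Control.Exception (catchJust)
import System.IO.Error (isDoesNotExistError)

-- Handle only "file does not exist"; any other exception propagates.
readOrDefault :: FilePath -> String -> IO String
readOrDefault path def =
  catchJust (\e -> if isDoesNotExistError e then Just () else Nothing)
            (readFile path)
            (\_ -> return def)
```

The predicate returns Just only for the errors we want to handle, which keeps the handler from accidentally swallowing unrelated failures such as permission errors.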
Re: [Haskell-cafe] Doubting Haskell
On 16/02/2008, Alan Carter [EMAIL PROTECTED] wrote: Then when all this was going on, question number five appeared: What the hell are these lightweight Haskell threads? Are they some kind of green threads running non-preemptively inside a single OS thread? Are they OS threads that could run concurrently on multi-core hardware? If they are green threads (and it sounds like they are) then this is all an academic exercise which has no production application yet. Best wishes - and still hoping I'm wrong after all Alan Carter Sorry for missing this question in my first response. The answer depends on the Haskell implementation in question, but of course we're talking about GHC here. Haskell threads, in the sense of Control.Concurrent.forkIO, are essentially a sort of green thread which is scheduled by the Haskell runtime system. Threads can either be bound to a particular OS thread, or (as is the default) not be bound to a particular OS thread, allowing the scheduler to manage n Haskell threads with m OS threads, where usually you want to set m to something like the number of processors in your machine. I'm a little hazy on the details, and perhaps someone more familiar with the GHC runtime can fill in some more details for you if you'd like. Aside from Concurrent Haskell (which was originally designed for single-processor concurrency and later extended to allow for scheduling threads to execute in multiple OS threads), there is Parallel Haskell, which is used to annotate pure computations for parallelism (but since they're pure, there is no concurrency). At its core, Parallel Haskell has an extremely simple programmer interface:

    par :: a -> b -> b

Evaluation of an expression of the form (par x y) will cause x to be put in a queue of expressions to be evaluated by a worker on some OS thread, if there is free time, before resulting in y.
If there is no time to evaluate x on some processor before it is eventually needed, then evaluation just proceeds normally, but if there is, then it won't need evaluation later, due to the usual sharing from lazy evaluation. From this extremely simple form of parallel annotation, it's possible to build lots of interesting mechanisms for carrying out evaluation in parallel. You can read more about that in a paper titled Algorithm + Strategy = Parallelism by PW Trinder, K Hammond, H-W Loidl and Simon Peyton Jones, or check out the documentation for Control.Parallel.Strategies. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
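To make that concrete, here is a minimal sketch of a par-annotated function. The usual interface is Control.Parallel from the parallel package; this sketch leans on the assumption that GHC's base also exposes par and pseq via GHC.Conc, so it compiles without extra packages:

```haskell
import GHC.Conc (par, pseq)

-- Sparks the evaluation of one recursive call while the current
-- thread works on the other; with a single capability the spark is
-- simply evaluated on demand, exactly as described above.
pfib :: Int -> Int
pfib n
  | n < 2     = n
  | otherwise = x `par` (y `pseq` (x + y))
  where
    x = pfib (n - 1)
    y = pfib (n - 2)
```

Compiled with -threaded and run with +RTS -N2, the sparks may actually be picked up by a second worker; either way the result is the same, which is the point of annotating pure code.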
Re: [Haskell-cafe] Where does ~ come from?
On 17/02/2008, Steve Lihn [EMAIL PROTECTED] wrote: While I am reading TypeCompose module, I am having trouble finding where ~ comes from? (And its son ~~ and grandson ~~~, etc.) This is my immediate question. Help is appreciated. (~) is typically an infix type variable. If it were a constructor, it would have to start with a colon. So it doesn't have a definition as such, you can think of it as any type constructor with at least two type parameters. I think that would explain why you were having such trouble searching for it! Generally, though, if you want to know where something is defined, you can use ghci's :info command, and it should tell you the file and line in which it's defined, as well as its type (if it's a value) and its fixity (if it's an infix operator). You can then load up whatever documentation or source code is available for that module. Of course, in this case, it would report (correctly) that (~) is not in scope. Usually, packages on Hackage have whatever documentation is available linked directly from their pages on Hackage. hope this helps! - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] A handy little consequence of the Cont monad
On 04/02/2008, Philip Armstrong [EMAIL PROTECTED] wrote: I've always liked $ for this kind of code, if you want to keep the arguments around:

    nest xs = runCont $ sequence $ map Cont xs

seems quite natural to me. I'd probably write that as

    nest xs = runCont . sequence . map Cont $ xs

or else as:

    nest xs = runCont . sequence $ map Cont xs

so as not to abuse the fact that ($) really has the wrong associativity. (I didn't really give that aspect of the code a moment's thought, or else I'd probably have made it either points-free or used the first form above. I've been bitten by the MR enough times that I'm wary of eta-reducing the last parameter out of functions -- of course, the explicit type signature means it doesn't matter.) It would be nice to flip the associativity of ($) someday. It loses little in the way of expressiveness, since one can generally replace the first (n-1) instances of ($) with (.) straightforwardly (the one exception to this being when there are other operator symbols like (***) acting on the functions involved, but these cases are fairly rare, and it's arguably clearer to leave those parens in). What it would buy us to make ($) left associative is that we could, for instance, remove the parens from an expression like:

    f (g x) (h y) (k z)

getting:

    f $ g x $ h y $ k z

Perhaps for Haskell 2. :) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
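To illustrate, here is a toy left-associative application operator ($$ is a made-up name for this sketch, not a proposal for the actual Prelude):

```haskell
infixl 0 $$

-- Ordinary application, but declared left-associative.
($$) :: (a -> b) -> a -> b
f $$ x = f x

-- f $$ g x $$ h y parses as (f (g x)) (h y), so a multi-argument
-- call sheds its parentheses:
example :: Int
example = (+) $$ succ 1 $$ pred 4   -- i.e. (+) (succ 1) (pred 4)
```

With the standard right-associative ($), the same expression would instead parse as (+) (succ 1 (pred 4)) and fail to typecheck, which is exactly the asymmetry being discussed.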
[Haskell-cafe] A handy little consequence of the Cont monad
Hello, Today on #haskell, resiak was asking about a clean way to write the function which allocates an array of CStrings using withCString and withArray0 to produce a new with* style function. I came up with the following:

    nest :: [(r -> a) -> a] -> ([r] -> a) -> a
    nest xs = runCont (sequence (map Cont xs))

    withCStringArray0 :: [String] -> (Ptr CString -> IO a) -> IO a
    withCStringArray0 strings act = nest (map withCString strings)
                                         (\rs -> withArray0 nullPtr rs act)

Originally, I'd written nest without using the Cont monad, which was a bit of a mess by comparison, then noticed that its type was quite suggestive. Clearly, it would be more generally useful whenever you have a bunch of with-style functions for managing the allocation of resources, and would like to turn them into a single with-style function providing a list of the acquired resources. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
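For a quick feel for nest away from the FFI, here is a pure usage sketch. It is written against today's Control.Monad.Cont from mtl, where the Cont constructor is spelled as the function cont (an update to the code above, not the original spelling):

```haskell
import Control.Monad.Cont (cont, runCont)

nest :: [(r -> a) -> a] -> ([r] -> a) -> a
nest xs = runCont (sequence (map cont xs))

-- Each list element is a "with"-style supplier of an r;
-- ($ 1) is the supplier that hands its continuation the value 1.
demo :: Int
demo = nest [($ 1), ($ 2), ($ 3)] sum
```

Here sequence does all the work: it chains the suppliers so that the final continuation receives the list of everything they produced, which is precisely the with-style composition described above.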
Re: [Haskell-cafe] How to add ENV variable in runhaskell shell script
On 13/01/2008, Steve Lihn [EMAIL PROTECTED] wrote: Hi, In perl scripts (unix), one can do == #!/usr/local/bin/perl BEGIN { $ENV{LD_LIBRARY_PATH} = ...; } do my perl stuff == The setting of ENV variable before the program runs allows me to fix the shell environments. In this case, I need to specify where to look for the shared library. How do I do this in Haskell if I am writing the same script using: == #!/path/to/runhaskell {- how to set LD_LIBRARY_PATH = ... ? -} do my haskell stuff == Thanks, Steve In System.Posix.Env there is a function setEnv which will let you set environment variables. I have no idea why this functionality is not exposed in System.Environment. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
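A minimal sketch of the Haskell equivalent of that Perl BEGIN block, using the unix package's System.Posix.Env (the Bool argument says whether to overwrite an existing value; the path "/opt/mylibs" is a placeholder, and note that later versions of base did eventually grow a setEnv in System.Environment):

```haskell
import System.Posix.Env (setEnv, getEnv)

-- Set LD_LIBRARY_PATH for this process and its children;
-- True = overwrite any existing value.
setLibraryPath :: String -> IO ()
setLibraryPath dir = setEnv "LD_LIBRARY_PATH" dir True

showLibraryPath :: IO ()
showLibraryPath = getEnv "LD_LIBRARY_PATH" >>= print
```

One caveat: the dynamic linker reads LD_LIBRARY_PATH when a process starts, so for libraries loaded at startup you may still need a wrapper script or a re-exec; setting it from inside the program mainly helps subprocesses and dlopen-style loading.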
Re: [Haskell-cafe] Solving a geometry problem with Haskell
On 12/01/2008, Hugh Perkins [EMAIL PROTECTED] wrote: On Jan 12, 2008 10:19 PM, Rafael Almeida [EMAIL PROTECTED] wrote: After some profiling I found out that about 94% of the execution time is spent in the ``isPerfectSquare'' function. I guess that Haskell's referential transparency means the answers to isPerfectSquare will be cached, i.e. automatically memoized? (not sure if that's the correct term?) This isn't true unless calling isPerfectSquare reads the result out of a lazily generated array of responses or some other data structure at the top level. There's no memoisation of functions by default because it would typically require ridiculous amounts of memory; however, you can usually rely on values bound to variables to only be computed once, barring those which are typeclass polymorphic. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
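For what that top-level-structure trick looks like in practice, here is a hedged sketch (the check itself is a naive floating-point one, just for illustration):

```haskell
-- A top-level lazy list: each Bool is computed at most once and then
-- shared, which is the only "automatic" memoisation you get for free.
isSquareTable :: [Bool]
isSquareTable = map check [0 ..]
  where
    check n = let r = floor (sqrt (fromIntegral n :: Double)) :: Int
              in r * r == n

isPerfectSquare :: Int -> Bool
isPerfectSquare n = n >= 0 && isSquareTable !! n
```

List indexing is O(n), so a serious memo table would use an array or a binary trie instead; the point here is only that the sharing comes from the table being a top-level value, not from the function itself.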
[Haskell-cafe] GHC 6.8.1 Documentation
Hello all, Recently I noticed that all of my bookmarks to the hierarchical libraries documentation broke, and I'm not entirely happy with the solution of just correcting all the links to point at the new URLs since the URLs all have package version numbers in their names, which means that I'll have to update them all over again next time. Is there any chance we could get stable URLs for the latest version of each module somewhere? (Perhaps via symlinks on the webserver?) I also noticed that the links to the source code seem to be missing from all the 6.8.1 Haddock docs now, which is something that I used quite a lot in the past. It'd be nice to have those back. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Standalone PNG module?
On 06/11/2007, Peter Verswyvelen [EMAIL PROTECTED] wrote: I would like to load 32-bit images (RGB+alpha) for use with GLUT/OpenGL. I know GTK2HS has support for loading images, but does a standalone Haskell (wrapper) module exists for loading images? PNG or TGA would be enough for me. The Imlib2 binding (called Imlib-0.1.0) on Hackage provides the necessary tools to load and manipulate images in a variety of formats including PNG, but it's under-documented and a little rough and untested. I'm currently working on a new version with proper documentation, a number of bugfixes, and a somewhat more Haskellish interface. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Best Linux for Haskell?
On 06/11/2007, Tim Docker [EMAIL PROTECTED] wrote: I can confirm that ghc-6.8.1 builds from source completely without fuss on the latest ubuntu (7.10). (... though it took a couple of hours of cpu time :-) Tim I can confirm that the generic x86 binary package for 6.8.1 works on Ubuntu 7.10, provided that you use alien to install the readline-compat RPM which is provided on the same page. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: Hiding side effects in a data structure
On 21/10/2007, Jon Fairbairn [EMAIL PROTECTED] wrote: No, they (or at least links to them) typically are that bad! Mind you, as far as fragment identification is concerned, so are a lot of html pages. But even if the links do have fragment ids, pdfs still impose a significant overhead: I don't want stuff swapped out just so that I can run a pdf viewer; a web browser uses up enough resources as it is. And will Hoogle link into pdfs? Swapped out!? What PDF viewer are you running on what machine? Currently, with a 552 page book open (Hatcher's algebraic topology), my PDF viewer (Evince) uses about 36MiB, which is around 3.6% of my available memory, a rather pedestrian 1 GiB. Other documents produce very similar results. The largest I was able to make it with a PDF which wasn't pathologically constructed was about 42MiB, with a PDF that had lots of diagrams. Firefox uses about twice that on an average day. If your PDF viewer uses significantly more than that, I recommend looking for a new one. ;) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: Math.OEIS 0.1
Uh, not quite. My code essentially treats Mathematica as a function on strings (and involves some additional shell wrapping), since there's essentially no processing to do on the Haskell side there. If you wanted to do it right, you'd probably want to write an FFI binding to MathLink. On 22/10/2007, Stefan O'Rear [EMAIL PROTECTED] wrote: On Mon, Oct 22, 2007 at 07:20:47AM -0400, Brent Yorgey wrote: * returning a lazy infinite list for infinite sequences via an embedded general AI and Mathematica interpreter. Assuming you have a licensed copy of Mathematica, get in touch with Cale Gibbard; he has done all the work for interfacing Mathematica with Haskell (cf 'mbot' on freenode). Stefan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANN: Math.OEIS 0.1
On 22/10/2007, Brent Yorgey [EMAIL PROTECTED] wrote: Now, if the OEIS stored *Haskell* code to generate sequences... -Brent Yeah, that would be rather nice. :) I think I have seen Haskell code on a few of the entries, even, but I don't know how standard the format is. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: [Haskell] Math behind Haskell
On 23/09/2007, Neil Mitchell [EMAIL PROTECTED] wrote: Hi The haskell-cafe@ mailing list is more appropriate for messages such as this. haskell@ is just for announcements (it should be called haskell-annouce@ !) * Lambda calculus - the basis of functional languages * Category theory - where all these mysterious things like monads, arrows, and functors come from. I'd add: * Discrete Maths - booleans, relations, functions etc. * Type theory * Logic programming (Prolog) * Semantics Although it might technically be covered by Discrete Maths, I'd like to add algebraic and enumerative combinatorics to the list. That is, the sort of mathematics where one deals with bijective decompositions of combinatorial structures and generating series (generating functions). The overall picture is that you put collections of discrete structures in correspondence with algebraic objects of some sort, typically the elements of a suitable ring, and make use of that correspondence to move results back and forth. I've found that a lot of the recent stuff like the theory of zippers and differentiation of data structures directly reflects the sort of combinatorial operations which go on in algebraic combinatorics. (See, for instance, Combinatorial Enumeration, by Jackson and Goulden, or Joyal's combinatorial species.) No doubt there are more ideas which could be fruitfully imported from there. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
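A tiny example of that correspondence: the list functor satisfies L(a) = 1 + a·L(a), so its derivative is L'(a) = L(a)², and a one-hole context in a list is indeed a pair of lists, which is exactly the familiar list zipper (a sketch, with my own constructor names):

```haskell
-- Elements before the hole (in reverse order), the focus, elements after:
-- the two list fields are the two L(a) factors of the derivative.
data ListZipper a = ListZipper [a] a [a] deriving (Eq, Show)

fromList :: [a] -> Maybe (ListZipper a)
fromList []       = Nothing
fromList (x : xs) = Just (ListZipper [] x xs)

-- Move the focus one element to the right, if possible.
right :: ListZipper a -> Maybe (ListZipper a)
right (ListZipper ls x (r : rs)) = Just (ListZipper (x : ls) r rs)
right _                          = Nothing
```

The same differentiation story goes through for trees and other algebraic datatypes, which is the sense in which the zipper literature directly reflects the combinatorial operations mentioned above.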
Re: [Haskell-cafe] monads and groups -- instead of loops
On 01/08/07, Greg Meredith [EMAIL PROTECTED] wrote: Haskellians, Though the actual metaphor in the monads-via-loops doesn't seem to fly with this audience, i like the spirit of the communication and the implicit challenge: find a pithy slogan that -- for a particular audience, like imperative programmers -- serves to uncover the essence of the notion. i can't really address that audience as my first real exposure to programming was scheme and i moved into concurrency and reflection after that and only ever used imperative languages as means to an end. That said, i think i found another metaphor that summarizes the notion for me. In the same way that the group axioms organize notions of symmetry, including addition, multiplication, reflections, translations, rotations, ... the monad(ic axioms) organize(s) notions of snapshot (return) and update (bind), including state, i/o, control, In short group : symmetry :: monad : update Best wishes, --greg Hello, I just wrote http://www.haskell.org/haskellwiki/Monads_as_computation after starting to reply to this thread and then getting sidetracked into writing a monad tutorial based on the approach I've been taking in the ad-hoc tutorials I've been giving on #haskell lately. :) It might be worth sifting through in order to determine an anthem for monads. Something along the lines of: monads are just a specific kind of {embedded domain specific language, combinator library} would work, provided that the person you're talking to knows what one of those is. :) I've found it very effective to explain those in general and then explain what a monad is in terms of them. I'm not certain that the article is completely finished. I'm a bit tired at the moment, and will probably return to polish the treatment some more when I'm more awake, but I think it's finished enough to usefully get the main ideas across. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] infinite list of random elements
On 30/07/07, Chad Scherrer [EMAIL PROTECTED] wrote: I'm trying to do something I thought would be pretty simple, but it's giving me trouble. Given a list, say [1,2,3], I'd like to be able to generate an infinite list of random elements from that list, in this case maybe [1,2,1,3,2,1,3,2,3,1,2,...]. I'm using IO for random purely due to laziness (my own, not Haskell's). I was thinking the best way to do this might be to first write this function: randomElts :: [a] -> [IO a] randomElts [] = [] randomElts [x] = repeat (return x) randomElts xs = repeat r where bds = (1, length xs) xArr = listArray bds xs r = do i <- randomRIO bds return (xArr ! i) Then I should be able to do this in ghci: sequence . take 5 $ randomElts [1,2,3] [*** Exception: stack overflow Any idea what's going on? I thought laziness (Haskell's, not my own) would save me on this one. I don't get that result. However, you can't compute an infinite random list in IO without using something like unsafeInterleaveIO. Instead, you will probably be interested in randoms/randomRs, which take a random generator and give an infinite list of results. Using those, we could write something like: randomElts :: [a] -> IO [a] randomElts [] = return [] randomElts xs = do g <- newStdGen return (map (xArr !) (randomRs bds g)) where bds = (1, length xs) xArr = listArray bds xs which, for a nonempty input list, gives an infinite list of pseudorandom elements of that input list. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
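To make the idea concrete, here is a self-contained runnable sketch of the second approach. To keep it free of the random package, it substitutes a toy linear congruential generator for newStdGen/randomRs; the lcg constants and the seed parameter are illustrative assumptions, not part of the original post.

```haskell
-- A toy linear congruential generator standing in for System.Random's
-- StdGen; the constants are arbitrary illustrative choices.
lcg :: Int -> Int
lcg s = (1103515245 * s + 12345) `mod` 2147483647

-- An infinite stream of pseudorandom non-negative Ints from a seed,
-- playing the role of randoms/randomRs.
randoms' :: Int -> [Int]
randoms' = tail . iterate lcg

-- For a nonempty input list, an infinite list of pseudorandom elements
-- of that list (using (!!) in place of the email's listArray lookup).
randomElts :: Int -> [a] -> [a]
randomElts _    [] = []
randomElts seed xs = map ((xs !!) . (`mod` length xs)) (randoms' seed)

main :: IO ()
main = print (take 10 (randomElts 42 [1, 2, 3 :: Int]))
```

Since the list is produced lazily, take 10 forces only ten draws; the rest of the infinite list is never computed.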
Re: [Haskell-cafe] Very freaky
On 10/07/07, Andrew Coppin [EMAIL PROTECTED] wrote: Stefan O'Rear wrote: Another good example: foo :: ∀ pred : Nat → Prop . (∀ n:Nat . pred n → pred (n + 1)) → pred 0 → ∀ n : Nat . pred n x_x Which you can read as For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.. IE, mathematical induction. ...and to think the idea of mathematical symbols is to make things *clearer*... As someone with a background in mathematics, I'd say that the idea of mathematical symbols is to make things more concise, and easier to manipulate mechanically. I'm not entirely certain that their intent is to make things clearer, though often they can make things more precise (which is a bit of a double edged sword when it comes to clarity). I quite often try to avoid symbols as much as possible, only switching to formulas when the argument I'm making is very mechanical or computational. After all, in most circumstances, the reader is going to have to translate the symbols back into concepts and images in their head, and usually natural language is a little farther along in that process, making things easier to read. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: How Albus Dumbledore would sell Haskell
On 21/04/07, apfelmus [EMAIL PROTECTED] wrote: Dan Weston wrote: -- Why is this not in Prelude? dup x = (x,x) It is (almost). It's called join (,) It's unfortunate that the Monad instance for ((->) e) isn't in the Prelude. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
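A quick sketch of why this works: with the function monad ((->) e), join f x = f x x, so join (,) duplicates its argument. The instance ships with modern base, so only join needs importing.

```haskell
import Control.Monad (join)

-- dup via the reader ((->) e) monad: join f x = f x x, so join (,) x = (x, x)
dup :: a -> (a, a)
dup = join (,)

main :: IO ()
main = print (dup 'x')  -- ('x','x')
```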
Re: [Haskell-cafe] Did quickchekc get dropped from ghc from 6.4 to 6.6?
On 26/02/07, Thomas Hartman [EMAIL PROTECTED] wrote: According to http://www.cs.chalmers.se/~rjmh/QuickCheck/ QuickCheck is distributed with ghc. I seem to recall this came with ghc 6.4. After upgrading to ghc 6.6, however, I don't seem to have it anymore. Hmm, which release do you have? I don't think that I had to do anything special to get QuickCheck. I just have the generic Linux binary distribution. If you have a distribution-specific package, check that there isn't another one for QuickCheck. Alternately, yeah, you can just get it from Hackage like you described. [EMAIL PROTECTED]:~$ ghci GHC Interactive, version 6.6, for Haskell 98. http://www.haskell.org/ghc/ Type :? for help. Loading package base ... linking ... done. Prelude> :m + Test.QuickCheck Prelude Test.QuickCheck> ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Looking for documentation on Control.Parallel.Strategies
On 16/02/07, Pepe Iborra [EMAIL PROTECTED] wrote: There is also an excellent paper in tutorial style which imho is very useful to understand the interaction of laziness with the Control.Parallel.Strategies combinators: Algorithm + Strategy = Parallelism Philip W. Trinder, Kevin Hammond, Hans-Wolfgang Loidl, and Simon L. Peyton Jones. Journal of Functional Programming, 8(1), Jan 1998. http://www.macs.hw.ac.uk/~dsg/gph/papers/abstracts/strategies.html Indeed, it would be good if the new Haddock linked to this as well. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
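The core idea of the paper can be sketched in a few lines using the par and pseq primitives from base's GHC.Conc; this is a simplified illustration of a parallel map, not the real Control.Parallel.Strategies API.

```haskell
import GHC.Conc (par, pseq)

-- Spark evaluation of each element in parallel while the spine is
-- forced sequentially; the result is the same as plain map, and without
-- -threaded the sparks simply never run in parallel.
parMap :: (a -> b) -> [a] -> [b]
parMap _ []     = []
parMap f (x:xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x
    ys = parMap f xs

main :: IO ()
main = print (parMap (* 2) [1 .. 10 :: Int])
```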
Re: [Haskell-cafe] How did you stumble on Haskell?
On 28/01/07, Alexy Khrabrov [EMAIL PROTECTED] wrote: How do people stumble on Haskell? I got referred to Haskell by an acquaintance when I happened to mention that I was interested in algebraic approaches to music theory. He referred me to Haskore. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Re: Importance of MonadRandom
The splittable idea isn't mine; it looks like perhaps Remi Turk did it. One thing which I'd recommend is including getRandom and getRandomR, implemented in terms of getR and the ordinary random operations; since they're the two most common uses, it's probably worth it to define them separately. Also, getRandoms and getRandomRs would still be convenient to have in the library, and should do appropriate splitting of the generator. - Cale On 06/02/07, Yitzchak Gale [EMAIL PROTECTED] wrote: I wrote: Cale Gibbard's MonadRandom... I would like to suggest a change to the interface... class (Monad m) => MonadRandom m where nextR :: m Int splitR :: m (m ()) rangeR :: m (Int, Int) getR :: (forall g . RandomGen g => g -> a) -> m a I see that I have inadvertently done two things differently than Cale with regard to split: Cale used a different type, and he put it into a separate monad. The separate monad idea is a very good one. My type is a bit more general than Cale's, and it emphasizes the amusing fact that split is a kind of inverse to monadic join. (Actually, a section.) But Cale's type looks more convenient to use. I am modifying my proposal accordingly on both points. Below are the new versions of the classes. Any comments? Thanks, Yitz \begin{code} class Monad m => MonadRandom m where nextR :: m Int rangeR :: m (Int, Int) getR :: (forall g . RandomGen g => g -> a) -> m a -- Minimum complete definition: nextR and rangeR getR f = do r <- nextR (lo, hi) <- rangeR return $ f $ TrivalGen r lo hi class MonadRandom m => MonadRandomSplittable m where splitR :: m a -> m (m a) splitRandom :: m a -> m a -- Use the following default method definitions only -- when splitting is a trivial operation, such as for -- hardware-based random generators. splitR = return splitRandom = id instance Monad m => MonadRandomSplittable (RandT m) where splitR x = RandT (StateT split) >>= return . (>> x) . RandT . put splitRandom x = RandT (StateT split) >>= lift . 
evalRandT x instance MonadRandomSplittable Rand where splitR = liftState split >>= return . liftState . put splitRandom x = Rand (State split) >>= return . evalRand x \end{code} ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Levels of recursion
On 31/01/07, Andrew Wagner [EMAIL PROTECTED] wrote: Like I said, I'm familiar with map, foldr, etc. But this is the first time it's dawned on me that I need to think in more general recursive patterns like this instead of simple, raw recursion. That map and foldr aren't JUST a nice way of doing things to a little list, but they are recursive operators, building blocks that I need to use as fluently as anything else. I suspect I'm not alone, though of course I could be wrong. But I would bet that a significant difference between newbies and more experienced Haskell programmers is the degree to which they can think in this composition/HOF way. Where the beginner wants to program using raw, simple recursion, the more experienced sees an opportunity to apply an abstraction. So, a couple of questions to ponder about this: Is this unique to Haskell, or could the same be said about any functional language? How can we teach this better to newbies? Most of what I see in the tutorials is "Higher order functions accept as parameters and/or return other functions. Here's some examples: explanation of map, explanation of foldr. Moving on, ..." But clearly, this is something important, and I think we can do a better job of teaching it. Suggestions? This is absolutely correct. I suppose I was lucky in that it was probably my first epiphany of functional programming, and came before I'd really learned much Haskell at all. I somehow determined that working on understanding the translation from loops in the imperative programming I knew to map, filter, zip, fold and so on was going to be important, which helped a lot. I had a good time taking things that way, and I certainly think it should be a key point near the beginning of any functional programming tutorial. Higher order functions, and higher-order list processing functions in particular, are the control structures of functional programming. They stick everything else together. 
They are every bit as important as loops are to the imperative programmer. They can also be more natural ways of thinking about how things fit together than the imperative approach offers. My friend, who'd had a bit of C++ and a bit of Java, (and had pretty much hated it), preferred the Haskell approach. He gave me the example that map wash dishes is a whole lot closer to what you're thinking when washing dishes than numbering the dishes and incrementing a counter, washing dish n at each step. This approach to functional programming is especially important in a lazy language, since it works so well. Lists are, in a sense, loops which haven't yet happened, and much of programming is done by transforming them. Due to laziness, those elements not needed won't be computed, which is a similar capability as that of being able to break out of a loop early. This inversion allows you to essentially extend the code which would be in the loop body after the fact (that is, without modifying the existing code), which is not something you could do in a strict imperative language unless you'd specifically provided for passing in a continuation in the loop, or were modelling lazy lists somehow. Eventually, you come to mostly forget about loops and just think in terms of the lists themselves, but it's a very useful alternate view to have handy. Of course, lists aren't the only data structure just as loops aren't the only kind of recursion. Laziness basically makes data structures into tools giving you the ability to reify recursion such that only those steps which end up needed later on are actually carried out. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
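The shift described above can be made concrete with a small example of my own (not from the original thread): the same function written first with raw recursion and then assembled from the standard building blocks.

```haskell
-- Raw recursion: sum of the squares of the even numbers in a list.
sumSqEvensRec :: [Int] -> Int
sumSqEvensRec []     = 0
sumSqEvensRec (x:xs)
  | even x    = x * x + sumSqEvensRec xs
  | otherwise = sumSqEvensRec xs

-- The same computation as a pipeline of the standard combinators:
-- the "loop" is split into selection, transformation, and reduction.
sumSqEvens :: [Int] -> Int
sumSqEvens = sum . map (^ 2) . filter even

main :: IO ()
main = print (sumSqEvensRec [1 .. 10], sumSqEvens [1 .. 10])
```

Thanks to laziness, the intermediate lists in the combinator version are consumed as they are produced, so the pipeline runs in constant space just like the recursive loop.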
Re: [Haskell-cafe] newbie timing question
On 26/01/07, Sean McLaughlin [EMAIL PROTECTED] wrote: Hello, I'm trying to write a simple function to time an application. -- this doesn't work time f x = do n1 <- CPUTime.getCPUTime let res = f x in do n2 <- CPUTime.getCPUTime return (res,n2 - n1) On a function that takes 8 seconds to complete, returns (True,4600) According to the documentation, this time is in picoseconds, making this 46 microseconds. The statement let res = f x on its own causes essentially no computation to occur. In your program, res is only getting computed when the result of that do-block is finally *printed*. Before that, it's just a pointer to some code which has yet to run. There's no reason to compute res there, and so it isn't computed. Note that the amount of time spent computing res might depend on just how much of it is ever used by the program. Consider the case where res is an infinite list, for example. One easy solution would be to print the value of res between the timers, though that will include the time for show and IO of course. Another is to use Control.Exception.evaluate to force the evaluation of res to occur in sequence with the other IO actions. Be aware that if res is compound data, you may need to sequence the computation of each of the components of res as well, as Control.Exception.evaluate will only cause evaluation to occur to the point of determining the top-level constructor (Weak Head Normal Form). In order to control evaluation, there is a primitive in Haskell called seq with the behaviour that evaluating seq x y will cause x to become evaluated to WHNF before resulting in y. Note that this only occurs when the seq itself is evaluated. Writing seq x x instead of x is redundant, as evaluation is being forced anyway, by the time the seq is seen. Using this, one can write functions which traverse datastructures and cause them to be fully evaluated as soon as any part of them is needed. 
strictPair (x,y) = x `seq` y `seq` (x,y) strictList xs = foldr seq xs xs As a generalisation of this idea, in the module Control.Parallel.Strategies, there is a class called NFData, with the function rnf :: (NFData a) => a -> () (note that the type doesn't really capture the evaluation side-effect -- when the empty tuple returned is evaluated, the whole structure passed to rnf will be forced). Using this, we could write: strictList xs = rnf xs `seq` xs Anyway, that's the quick intro to subverting Haskell's lazy evaluation. Before I go, there's another rather more honest way to do this sort of thing, provided that you're more interested in the timing for your own sake rather than having it be a value that your program is interested in. GHC has a fairly extensive profiler which will produce beautifully detailed reports on just how much time and memory were spent on each part of your program, and it will even let you tag subexpressions to watch and so on. For that, read http://www.haskell.org/ghc/docs/latest/html/users_guide/profiling.html Or if you're impatient, just build your program with the compiler options -prof -auto-all, and then run the resulting executable with +RTS -p -RTS on the commandline. It'll produce a profiling report which will show the relative amount of time spent in each part of the program, or +RTS -P -RTS, which will report the number of ticks and actual memory used in addition to the percentages. Hope this helps, - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
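Putting the advice together, a corrected version of the timing function might look like the sketch below; the name timeEval is my own, and everything used is standard base. Note the WHNF caveat from the message still applies: for compound results you would force the components too.

```haskell
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)

-- Force the result to WHNF between the two clock readings, so the work
-- actually happens inside the timed region.  The time is in picoseconds.
timeEval :: (a -> b) -> a -> IO (b, Integer)
timeEval f x = do
  n1  <- getCPUTime
  res <- evaluate (f x)
  n2  <- getCPUTime
  return (res, n2 - n1)

main :: IO ()
main = do
  (r, t) <- timeEval sum [1 .. 100000 :: Integer]
  print r          -- the forced result
  print (t >= 0)   -- elapsed CPU time is non-negative
```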
Re: [Haskell-cafe] mapTuple
On 12/01/07, Grady Lemoine [EMAIL PROTECTED] wrote: Is there anything in particular you're trying to accomplish? It seems like this is the type of thing you'd accomplish with typeclasses if you had a less general problem than you've presented. For example, mapShowTuple :: (Show a, Show b) => (a, b) -> (String, String) mapShowTuple (x, y) = (show x, show y) That said, it would be nice to be able to do something a little more general, but still with a specific typeclass, like mapNumOpTuple :: (Num a, Num b) => (Num c => c -> c) -> (a, b) -> (a, b) mapNumOpTuple f (x, y) = (f x, f y) (This unfortunately fails to typecheck; GHC chokes with Illegal polymorphic or qualified type. On the other hand, I'm still pretty new to Haskell myself, so maybe someone else knows how to write this correctly without doing complex type hackery.) It's close: {-# OPTIONS_GHC -fglasgow-exts #-} mapNumOpPair :: (Num a, Num b) => (forall c. Num c => c -> c) -> (a, b) -> (a, b) mapNumOpPair f (x,y) = (f x, f y) It would also be nice to be able to generalize over all typeclasses, e.g. (pseudo-code here) mapTypeclassOpTuple :: for all typeclasses C ((C a, C b) => (C c => c -> c) -> (a, b) -> (a, b)) but I don't even know how that would fit into Haskell's syntax. I suspect it's an idea that's been discussed, and I just don't know the term for it. That's an interesting idea, typeclass variables. It would require a bit of a kind system for typeclasses, but that's not so hard. I somehow doubt it would get used all *that* much though. Can anyone think of a clever way to apply this? - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
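For reference, the working version with the extension named by its modern pragma (RankNTypes; the OPTIONS_GHC -fglasgow-exts flag in the message is the GHC 6.x-era spelling):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The rank-2 type lets one polymorphic function be applied at two
-- different Num instances inside the same pair.
mapNumOpPair :: (Num a, Num b) => (forall c. Num c => c -> c) -> (a, b) -> (a, b)
mapNumOpPair f (x, y) = (f x, f y)

main :: IO ()
main = print (mapNumOpPair (* 2) (3 :: Int, 1.5 :: Double))
```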
Re: [Haskell-cafe] Efficient way to break up a lazy bytestring
On 30/12/06, Ranjan Bagchi [EMAIL PROTECTED] wrote: I guess the consumer's really important (Didn't even occur to me, I was concentrating on how I was generating the list). I was trying to de-lazy the list, I did the following: bs = [...] recs' = (take 100) breakUp bs recs = foldr seq recs' recs' print $ length recs Would the fold be blowing the stack? Ranjan Is there a really good reason to de-lazy the list? That's usually a bad idea if the list is long. If your application can do any kind of stream processing at all, then it's a better idea to allow it to work with the list lazily. If it can't, and you're basically producing some kind of summary of the input data, you're probably better off using a strict left fold and consuming the list in a strict fashion, but not forcing it earlier than actually needed. By strictifying the list, you're making it impossible to work with the first cons without computing the last one. That means that the following consumption of the list by length can't even start until it's finished building the whole thing in memory (you're wasting both time and space). Holding a large list like that in memory all at once has few benefits. If you're currently doing lazy IO and are concerned about the file changing as you're processing it, you're better off copying the whole file into memory as one big block and still building the list lazily from that copy in memory. If you're coming from any sort of imperative background, you can basically think of a list as a loop which has not yet happened. If you can generate the indices for the loop on the fly, you're probably not going to want to waste time putting them all in memory beforehand, regardless of what that loop is doing (how the list is being consumed). There are a few cases where strictifying production of data is good, but it's far more common to want strict consumption and lazy production. 
- Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
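A sketch of the recommended pattern, lazy production with strict consumption, using the strict left fold from Data.List; the chunk list here is my own stand-in for the lazily broken-up bytestring.

```haskell
import Data.List (foldl')

-- Lazily produced "records": 100 chunks of 10 elements each.  Only the
-- chunk currently being consumed needs to be in memory.
chunks :: [[Int]]
chunks = [[n .. n + 9] | n <- [0, 10 .. 990]]

-- Strict left fold: each accumulator step is forced as it happens, so
-- neither the whole chunk list nor a tower of thunks is ever built up.
summarise :: [[Int]] -> Int
summarise = foldl' (\acc c -> acc + length c) 0

main :: IO ()
main = print (summarise chunks)
```

Had we used foldr (or forced the whole list up front, as in the quoted code), the entire structure would be resident before the summary could complete; foldl' consumes it in a single constant-space pass.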
Re: [Haskell-cafe] Literate Haskell source files. How do I turn them into something I can read?
On 30/12/06, Michael T. Richter [EMAIL PROTECTED] wrote: I'm trying to wrap my mind around the darcs source code as a preliminary to looking into GHC's guts. All of darcs is written as .lhs files which have bizarre mark-up in them which distracts me from the actual Haskell source I'm trying to figure out and get used to. Apparently the GHC compiler can take .lhs files, strip them with unlit (a utility which I finally found buried deep in the GHC installation -- off-path) and then compile them normally. The problem I have is that unlit leaves behind instead these huge gaping (and highly distracting) stretches of whitespace while it takes out the markup. Are there any tools which I can use to render .lhs files readable? I'm fine with having them converted into documented source (i.e. source code embedded in documentation) or as pure Haskell source (but without the huge whitespace gaps) -- but I can't figure out how to get either. Assuming that it's LaTeX-based literate source, you usually run pdflatex on it to get a pdf of the code, but I'm not familiar with the darcs code in particular, and whether anything special needs to be done, or whether they have a specialised build for that. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] what are the points in pointsfree?
On 15/12/06, Scott Brickner [EMAIL PROTECTED] wrote: Donald Bruce Stewart wrote: sdowney: i'm not naive enough to think they are the composition function, and i've gathered it has something to do with free terms, but beyond that i'm not sure. unless it also has something to do with fix points? The wiki knows all! :) http://haskell.org/haskellwiki/Pointfree But pointfree has more points! A common misconception is that the 'points' of pointfree style are the (.) operator (function composition, as an ASCII symbol), which uses the same identifier as the decimal point. This is wrong. The term originated in topology, a branch of mathematics which works with spaces composed of points, and functions between those spaces. So a 'points-free' definition of a function is one which does not explicitly mention the points (values) of the space on which the function acts. In Haskell, our 'space' is some type, and 'points' are values. Hm. I've been lurking for a while, and this might be a bit of nit-picking as my first post, especially given I'm still a bit of a n00b in Haskell. I've been programming a long time, though - coming up on three decades now and virtually all of it really programming, no management. Anyway, as I understood it, the points were the terminal objects of the category in which you're working - in this case, pointed complete partial orders (CPOs), and the points are effectively values in the domain. The usage of point for terminal objects comes from the category of topological spaces, as you say, and algebraic topology is where category theory found its first big home - but that's not really what we're talking about here, is it? The point that the wiki article is trying to make is that the term points-free was first used in the context of algebraic topology, and generalised quickly from there. 
This may have even been before people were making the generalisation from elements of a set with a topology on it to maps from a terminal object to the space in question. It's a bit of a coincidence that the theory which we're using to describe the semantics of programs is topological in nature, the term would likely have found use here without that. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: Re: [Haskell-cafe] Building the community
On 14/12/06, Conrad Parker [EMAIL PROTECTED] wrote: What [hackers] are, unapologetically, is hostile to people who seem to be unwilling to think or to do their own homework before asking questions. People like that are time sinks — they take without giving back, and they waste time we could have spent on another question more interesting and another person more worthy of an answer. We call people like this losers (and for historical reasons we sometimes spell it lusers). I hope the Haskell community never adopts such an arrogant tone. I really agree with this. I think that one of the reasons that the Haskell community has grown to be so polite and helpful compared to many communities of comparable size is that kindness begets kindness. There's been a lot of chatter lately about how the Haskell community can grow faster than it has been. Personally, I would rather not have it grow any faster than we can manage. If there were a sudden influx of brand-new Haskell users overnight (unattached to some pre-existing educational program), I think this community would quickly have a major problem answering all the mailing list and IRC traffic. If the community grows gradually, we'll easily be able to support it, because there will be a correspondingly larger number of experts and intermediate level users to help us out. Remember, if some significant fraction of Haskell programmers advocate the language just to two of their friends, that's still exponential growth. There are also lots of other reasons why growing too quickly and gaining commercial users too quickly are double-edged swords. Personally, I'd like to see the Prelude undergo a few more iterations, and it gets harder to change as more and more projects rely on it. When it does change, the maintenance cost of old code goes up, and not every project has an active maintainer. Popularity also results in large numbers of people who are invested in a particular way of doing things and are resistant to change. 
Haskell is a research language, and I'd personally like to see it remain a research language as its user-base grows. Granted, it would be better than the 30-year-old-at-heart programming languages people are using today, but I don't think I'd be truly satisfied if the Haskell which everyone used was not the Haskell which the researchers were working on with all the cool new ideas in it. I'm not saying don't advocate Haskell, I love it too, and I like seeing new users, but I think we need to take some care in maintaining our excellent community while we're at it. Advocating the language on a grassroots basis means that each new user gets a mentor, or if not one, then an equivalent-mentor spread across the community, and people who have been mentored in such a way tend to repay it many times over in helping other beginners. Having a tightly-knit community who have adopted the mindset of following language development closely will also help offset the costs of change to the language. There are lots of benefits to growing the community, I just think that these are things to consider before we take out a full-page ad in the New York Times. :) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] automonadization of code?
Well, there's the Haskell Array Preprocessor (http://www.cs.utah.edu/~hal/APP/index.html), but I've never really used it. I think the first thing to notice is that Control.Monad really does contain a lot of functions which are useful control structures. The way that you wrote that loop seems extremely awkward to me. How I'd write it would be something like: import Control.Monad import Data.Array.IO main = do a <- (newArray (1,100) 1) :: IO (IOArray Int Int) forM [2..99] $ \i -> do v <- liftM2 (+) (readArray a (i-1)) (readArray a (i+1)) writeArray a i v print =<< getAssocs a (Note that forM = flip mapM is a recent addition to Control.Monad) It's possible to go quite a way to cleaning things up just using appropriate functions. On 12/12/06, Adam Megacz [EMAIL PROTECTED] wrote: Is there any work on automatic translation of code in some tiny imperative language into Haskell code that uses the ST and/or IO monads (or perhaps even pure functional code)? For example, the user writes something vaguely like array = newArray (1,100) 1 for x=2 to 99 array[x] := array[x-1]+array[x+1] And it is transformed into something like foldl (>>=) (newArray (1,100) 1) $ map (\n arr -> do a <- readArray arr (n-1) b <- readArray arr (n+1) writeArray arr n (a+b) return arr) [2..99] Obviously the small imperative language would have to be highly restricted and carefully chosen in order for the translation to always work and be predictable. I'm interested in any existing work on choosing such a sublanguage. Thanks! - a -- PGP/GPG: 5C9F F366 C9CF 2145 E770 B1B8 EFB1 462D A146 C380 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Good Haskell introduction for an Ocaml programmer?
On 12/12/06, Brian Hurt [EMAIL PROTECTED] wrote: Greetings, all. I'm an experienced Ocaml programmer, looking to broaden my horizons yet further and pick up Haskell, and I'm wondering if there's a good introduction to Haskell for me. I think that O'Caml programmers would be one audience where I'd start with the Gentle Introduction (It doesn't seem nearly gentle enough to the imperative-minded, but should be fine for someone used to functional programming). http://www.haskell.org/tutorial/ You're likely to find YAHT boring, but I think it may still be a good idea to skim it a bit. Also, you might like reading some specialised tutorials directly. http://haskell.org/haskellwiki/Introduction_to_IO -- a brief 5 minute introduction I wrote about how we think about IO in Haskell. My favourite monad tutorials: http://www.nomaware.com/monads/html/index.html -- This is really thorough and mostly very well written, though there are places where the examples are both highly contrived and hard to comprehend due to excessive use of the continuation monad. http://haskell.org/haskellwiki/Monads_as_containers -- this one I wrote myself, and takes a somewhat different approach to things than most others, treating monads as an abstraction of container types rather than of types of computation. These are just some of my personal recommendations, but there's a lot of stuff that's out there. There are some decent guides to what's available on the wiki: http://haskell.org/haskellwiki/Books_and_tutorials http://haskell.org/haskellwiki/Research_papers -- There's actually quite a lot of introductory material in papers, because writers often can't assume that their audience is intimately familiar with Haskell already, though they may assume a certain level of familiarity with CS in general. Also, make sure you fire up an IRC client and join us at #haskell on irc.freenode.org. 
There are lots of friendly people there who are always happy to discuss things, answer questions (don't worry if they seem simple), and point you at resources. Welcome to the list and have fun learning Haskell! - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Name that function =)
On 12/12/06, Louis J Scoras [EMAIL PROTECTED] wrote: What I didn't realize is that even though the documentation for Control.Monad says that ((->) r) is an instance, it must not be defined there. Importing Control.Monad.Reader indeed does make it work, thanks for pointing that out again. So I guess my next question would be, looking here http://haskell.org/ghc/docs/latest/html/libraries/base/Control-Monad.html How do I figure out that the instance declaration is actually in the Reader module? Is it just because the (->) constructor is special -- or just not hyperlinked? It's pretty subtle, but the Haddock there does list an instance of MonadReader r ((->) r). In newer GHC's the instance is also found in Control.Monad.Instances. In my opinion, it belongs in the Prelude. :) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Cannot understand liftM2
On 11/12/06, Nicola Paolucci [EMAIL PROTECTED] wrote: Hi All, I'm loving learning Haskell quite a bit. It is stretching my brain but in a delightful way. I've googled, I've hoogled but I haven't found a clear explanation for what exactly liftM2 does in the context below. Using the cool lambdabot pointless utility I found out that: \x -> snd(x) - fst(x) is the same as: liftM2 (-) snd fst I like the elegance of this but I cannot reconcile it with its type. I can't understand it. I check the signature of liftM2 and I get: Prelude> :t liftM2 liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r Can someone help me understand what's happening here? What does a Monad have to do with a simple subtraction? What is actually the m of my example? I am sure if I get this I'll be another step closer to illumination ... Thanks, Nick Hi Nick! The monad instance which is being used here is the instance for ((->) e) -- that is, functions from a fixed type e form a monad. So in this case: liftM2 :: (a1 -> a2 -> r) -> (e -> a1) -> (e -> a2) -> (e -> r) I bet you can guess what this does just by contemplating the type. (If it's not automatic, then it's good exercise) Now, why does it do that? Well, in general, liftM2 f x y = do u <- x v <- y return (f u v) So, it runs each of the computations you give it to get parameters for f, and then returns the result of applying f to them. In the ((->) e) monad (which is often called the reader monad, because it's isomorphic to it), running a computation just means passing it the environment of type e. So in the reader monad, the environment is passed to each of x and y, to get u and v respectively, and then the value of (f u v) is returned. To translate, this is like: liftM2 f x y e = f (x e) (y e) of course, just for this particular monad. Another nice example is join. 
In general, join :: (Monad m) => m (m a) -> m a join x = do y <- x; z <- y; return z or simply, join x = do y <- x; y In the reader monad, join has type (e -> e -> a) -> (e -> a), and it's somewhat obvious what it must be doing -- it must take the value of type e that it gets, and use it for both of the parameters of the function it gets in order to produce a value of type a. You can see by interpreting the do-notation that this is what happens in a curried way. First x is passed the environment, then its result (the partially applied function) is passed that environment. So, for instance, join (*) 5 will result in 25. The reader monad and functor instances are interesting, and worth exploring. There are some rather interesting idioms which can be obtained in this way. A nice one is: ap (,) f being the function (\x -> (x, f x)), which is handy for mapping across lists of x-coordinates in making plots of functions. Let us know if you need more detail about anything here. I sort of skipped over some details in the presentation. (You might want to work out exactly what return and bind do in this monad in order to understand things completely -- you can work them out from the types alone.) - Cale
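The reader-monad identities discussed in this thread can be checked directly. A minimal sketch, assuming a modern GHC where the Monad instance for ((->) e) is exported from the Prelude (in 2006 it lived in Control.Monad.Instances):

```haskell
import Control.Monad (ap, join, liftM2)

-- liftM2 (-) snd fst passes the same pair to snd and fst,
-- then subtracts: equivalent to \x -> snd x - fst x
diff :: (Int, Int) -> Int
diff = liftM2 (-) snd fst

main :: IO ()
main = do
  print (diff (3, 10))             -- 10 - 3 = 7
  print (join (*) (5 :: Int))      -- the argument is used twice: 25
  print (ap (,) (+ 1) (10 :: Int)) -- pairs a value with f of it: (10,11)
```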
Re: [Haskell-cafe] Re: Cannot understand liftM2
On 11 Dec 2006 16:55:17 +, Jón Fairbairn [EMAIL PROTECTED] wrote: Nicola Paolucci [EMAIL PROTECTED] writes: Hi All, I'm loving learning Haskell quite a bit. It is stretching my brain but in a delightful way. I've googled, I've hoogled, but I haven't found a clear explanation for what exactly liftM2 does in the context below. Using the cool lambdabot pointless utility I found out that: \x -> snd(x) - fst(x) is the same as: liftM2 (-) snd fst I like the elegance of this but I cannot reconcile it with its type. I can't understand it. I check the signature of liftM2 and I get: Prelude> :t liftM2 liftM2 :: (Monad m) => (a1 -> a2 -> r) -> m a1 -> m a2 -> m r Can someone help me understand what's happening here? What does a Monad have to do with a simple subtraction? What is actually the m of my example? I am sure if I get this I'll be another step closer to illumination ... Does typing :t liftM2 (-) snd into ghc enlighten you at all? Make sure to :m + Control.Monad.Reader first, because this instance unfortunately isn't in the Prelude.
Re: [Haskell-cafe] unexpected compiler behavior
The problem is most likely that line-buffered (or possibly block-buffered) IO is turned on, which means that buffers will only automatically be flushed to the display at the ends of lines. You can fix it by importing System.IO, and either calling (hFlush stdout) explicitly, or by calling (hSetBuffering stdout NoBuffering) at the start of your program. On 05/12/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Hi, I am a newbie in Haskell and do not understand why the interpreted mode differs from the compiled program. I run this code main = do { putStr "line1"; x <- getLine; putStr "line2"; y <- getLine; return (x ++ y) } either under runhugs or runghc or in a console under ghci and it works properly, that is, it first displays line1 and then prompts for the corresponding input through the keyboard, and so on. However, if I compile the code with the command line ghc --make Main.hs -o main and launch the compiled program main, then it at first prompts for the input lines through the keyboard and then displays the strings line1 line2. I have read in the tutorials that do-notation implies an ordered execution, yet this only occurs in interpreted mode. Using the monadic notation >>= the same unordered behaviour occurs. How can I avoid the unordered execution of this code in a compiled program? Thanks in advance. jepalomar
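A self-contained version of Cale's fix (the prompt strings are the poster's; switching off buffering is one option, an explicit hFlush after each putStr is the other):

```haskell
import System.IO

main :: IO ()
main = do
  hSetBuffering stdout NoBuffering  -- flush prompts immediately, even without a newline
  putStr "line1 "
  x <- getLine
  putStr "line2 "
  y <- getLine
  putStrLn (x ++ y)
```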
Re: [Haskell-cafe] 2D Array
You might want to consider using a DiffArray, though if the backtracking is of just the wrong sort, you might get worse performance than you want. Updates are O(length xs) where xs is the list of updates to be performed (so O(1) in the singleton case), but lookups into older versions of the array get slower by a constant amount every time an update is performed. (The semantics are referentially transparent, the performance is not.) However, updating an old version will cause a fresh copy to be made, and lookups will be fast again after that. On 03/12/06, Tony Morris [EMAIL PROTECTED] wrote: I wish to pass a 2 dimensional array to use in a back-tracking algorithm. Since I will be doing lots of inserts, a Data.Array is unsuitable. It seems that a Map Int (Map Int a) is the most suitable structure, but this seems cumbersome. Is there anything more appropriate? -- Tony Morris http://tmorris.net/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] SimonPJ and Tim Harris explain STM - video
On 23/11/06, Jason Dagit [EMAIL PROTECTED] wrote: A comment on that video said: --- BEGIN QUOTE --- It seems to me that STM creates new problems with composability. You create two classes of code: atomic methods and non-atomic methods. Non-atomic methods can easily call atomic ones -- the compiler could even automatically inject the atomic block if the programmer forgot. Atomic methods and blocks cannot be allowed to call non-atomic code. The non-atomic code could do I/O or other irrevocable things that would be duplicated when the block had to retry. --- END QUOTE --- I imagine an example like this (some pseudocode for a side-effect-happy OO language): class Foo { protected int counter; // assume this gets initialized to 0 public doSomething() { atomic { counter++; Console.Write("called doSomething execution #" + counter); // something which could cause the transaction to restart } } public doOtherThing() { atomic { doSomething(); // something which could cause the transaction to restart } } } Now imagine doSomething gets restarted; then we see the console output once each time, and counter gets incremented. So one solution would be to move the side effects (counter++ and the console write) to happen before the atomic block. This works for doSomething, but now what if we called doOtherThing instead? We're back to having the extra side effects from the failed attempts at doSomething, right? We just lost composability of doSomething? I'm assuming counter is only meant to be incremented once per successful run of doSomething, and we only want to see the output to the log file once per successful run, but it needs to come before the log output inside doSomething so that the log makes sense. I realize STM is not a silver bullet, but it does seem like side effects do not play nicely with STM. What is the proposed solution to this? Am I just missing something simple? Is the solution to make it so that Console.Write can be rolled back too?
Thanks, Jason The solution is to simply not allow side-effecting computations in transactions. They talk a little about it in the video, but perhaps that's not clear. The only side effects an atomic STM transaction may have are changes to shared memory. Another example in pseudocode: atomic { x <- launchMissiles; if (x < 5) then retry } This is obviously catastrophic. If launchMissiles has the side effect of launching a salvo of missiles, and then the retry occurs, it's unlikely that rolling back the transaction is going to be able to put them back on the launchpad. Worse yet, if some variable read in launchMissiles changes, the transaction would retry, possibly causing a second salvo of missiles to be launched. So you simply disallow this. The content of a transaction may only include reads and writes to shared memory, along with pure computations. This is especially easy in Haskell, because one simply uses a new monad STM, with no way to lift IO actions into that monad, but atomically :: STM a -> IO a goes in the other direction, turning a transaction into IO. In other languages, you'd want to add some static typechecking to ensure that this constraint was enforced. - Cale
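The type discipline Cale describes can be seen with the stm package. In this sketch (the account/transfer framing is mine, not from the thread) the transaction may only touch TVars; attempting a putStrLn inside the STM block would be a type error, which is exactly the static guarantee being discussed:

```haskell
import Control.Concurrent.STM

-- Move n units between two shared counters, atomically.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  balance <- readTVar from
  check (balance >= n)          -- retries the transaction until funds suffice
  writeTVar from (balance - n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)  -- only here does the transaction meet IO
  mapM_ (\v -> readTVarIO v >>= print) [a, b]  -- 70, then 30
```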
Re: [Haskell-cafe] Haskell Debugging
On 13/11/06, Valentin Gjorgjioski [EMAIL PROTECTED] wrote: I'm pretty new in Haskell, few days since I started learning it. I want to debug my programs. I'm currently using WinHugs, and I prefer a debugger for this. I tried googling and I found Hugs.Observe. I like how it works, but I still have one BIG problem with it. It doesn't work well with floats. The following example import Hugs.Observe ex8 :: [Float] ex8 = (observe "after reverse") reverse [10.0,7.0,3.0,0.0,4.0] gives me ex8 [4.0,0.0,3.0,7.0,10.0] Observations after reverse { \ ($-990871 : $-990888 : $-990905 : $-990922 : $-990939 : []) -> $-990939 : $-990922 : $-990905 : $-990888 : $-990871 : [] } First of all, I don't get this behaviour in Hugs 20050308 on Ubuntu. Main> ex8 [4.0,0.0,3.0,7.0,10.0] Observations after reverse { \ (10.0 : 7.0 : 3.0 : 0.0 : 4.0 : []) -> 4.0 : 0.0 : 3.0 : 7.0 : 10.0 : [] } and: Main> ex8 `seq` () () Observations after reverse { \ (_ : _ : _ : _ : _ : []) -> _ : _ } So it might just be the version which you have. I think that the $n values are perhaps representations of unevaluated thunks. Try it with a string, or something like ex = (observe "after replicate") (replicate 3) (5+5) and see what you get (should be a list with 3 elements that are all the same thunk). Also try: ex8 = foldr seq () xs `seq` (observe "after reverse") reverse xs where xs = [10.0,7.0,3.0,0.0,4.0] to see if forcing the values in the list first causes any change in the output. - Cale
Re: [Haskell-cafe] Haskell Debugging
On 14/11/06, Valentin Gjorgjioski [EMAIL PROTECTED] wrote: Just one more thing. If I write ex9 :: [Float] ex9 = (observe "after reverse") reverse [10.0,7.0,3.0,0.0,4.0] it doesn't work. If I delete ex9 :: [Float] then it works fine. Any suggestions? This doesn't happen for me. The only thing I can think of trying is to check the type of ex9 by deleting the type signature, loading the file, and typing: :t ex9 at the hugs prompt. If it prints anything other than ex9 :: [Float] then you'll have your answer. - Cale
Re: [Haskell-cafe] Haskell Debugging
On 14/11/06, Cale Gibbard [EMAIL PROTECTED] wrote: On 14/11/06, Valentin Gjorgjioski [EMAIL PROTECTED] wrote: Just one more thing. If I write ex9 :: [Float] ex9 = (observe "after reverse") reverse [10.0,7.0,3.0,0.0,4.0] it doesn't work. If I delete ex9 :: [Float] then it works fine. Any suggestions? This doesn't happen for me. The only thing I can think of trying is to check the type of ex9 by deleting the type signature, loading the file, and typing: :t ex9 at the hugs prompt. If it prints anything other than ex9 :: [Float] then you'll have your answer. - Cale Sorry, I just realised what you meant, and I do get the behaviour you initially described without the type signature there. Actually, without the type signature, it's inferring the type [Double], due to defaulting. However, adding the type signature ex9 :: [Double] will again cause it to print the values in the list rather than raw thunks. So perhaps it's because you haven't given a type signature explicitly, so the list is initially polymorphic, which means that the values in it are actually function calls initially (that is, things like (fromRational 10.0)), and they only end up getting a specialised type later, when hugs finishes compiling the module and applies the monomorphism restriction and defaulting, but hugs isn't going back and optimising the function with the additional knowledge? I don't really know, that's my best guess at it. Maybe someone who knows hugs better would know with more certainty? - Cale
Re: [Haskell-cafe] pleac, examples, etc
On 14/11/06, chris moline [EMAIL PROTECTED] wrote: --- brad clawsie [EMAIL PROTECTED] wrote: it would be great if some of the more informed posters here took a stab at filling in http://pleac.sourceforge.net/pleac_haskell/index.html a neat site for cookbook-style problem solving What I've always found funny about pleac is that none of the examples are actually Haskell, but some weird Haskell-with-oo-features. Does anyone know what language it is? Here are some examples: password = [1..8].mapM (\_ -> rand (0, chars.length - 1) (chars!)) -- in haskell, regexps are first class, they can be appended, sometimes easier to read m0' s = (s ==~ dec_number) They are actually Haskell, but notice the appendix: http://pleac.sourceforge.net/pleac_haskell/a1102.html It's Haskell with a completely bizarre prelude. No real new features were added to the language itself. Replacing composition with reverse function application is a waste of operator symbols. Personally, if I was going to change (.), it would be to define it as fmap, which (as a number of people, myself included, have pointed out) together with the instance of Functor for ((->) e) would generalise ordinary composition, map, liftM, etc. - Cale
Re: [Haskell-cafe] StateT and modify
bar = do x <- get; y <- lift (myAdd 1 x); put y; return y If you want, you can write something which captures this idiom: liftModify c = do x <- get; y <- lift (c x); put y and then use that like: bar = do liftModify (myAdd 1); get On 08/11/06, Peter Steiner [EMAIL PROTECTED] wrote: hi haskellers, i have a basic question regarding StateT encapsulating IO and the modify function. my scenario is similar to the following simple code snippet: import Control.Monad.State type MyState = StateT Int IO test = evalStateT foo 0 foo = do modify $ (+) 1; get i would like to be able to debug what's happening inside the modifier function. that's why i want to be able to use a modifier that's in the IO monad, like in the following, obviously defunct snippet: test = evalStateT bar 0 bar = do modify $ myAdd 1; get myAdd :: Int -> Int -> IO Int myAdd x y = do putStr "in myAdd\n"; return (x + y) this fails because (myAdd :: Int -> Int -> IO Int) does not match the required modifier argument type (Int -> Int) for MyState. Couldn't match expected type `Int' against inferred type `IO Int' In the second argument of `($)', namely `myAdd 1' In the expression: modify $ (myAdd 1) In a 'do' expression: modify $ (myAdd 1) is it possible to 'lift' the StateT modify into the inner monad (IO in my case)? regards, peter.
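Putting Cale's liftModify together with the poster's snippet gives a complete program; this is a sketch against the mtl-style Control.Monad.State used in the original mail:

```haskell
import Control.Monad.State

type MyState = StateT Int IO

-- Run an IO-based modifier against the current state.
liftModify :: (Int -> IO Int) -> MyState ()
liftModify c = do
  x <- get
  y <- lift (c x)
  put y

myAdd :: Int -> Int -> IO Int
myAdd x y = do
  putStrLn "in myAdd"   -- the debugging output the poster wanted
  return (x + y)

test :: IO Int
test = evalStateT (liftModify (myAdd 1) >> get) 0

main :: IO ()
main = test >>= print   -- prints "in myAdd", then 1
```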
Re: [Haskell-cafe] deepSeq vs rnf
On 22/10/06, Chad Scherrer [EMAIL PROTECTED] wrote: Hi, I had posted this question a while back, but I think it was in the middle of another discussion, and I never did get a reply. Do we really need both Control.Parallel.Strategies.rnf and deepSeq? Should we not always have x `deepSeq` y == rnf x `seq` y ? Maybe there's a distinction I'm missing, but it seems to me they're basically the same. I agree, they are the same. The Strategies library also gives much more general operations for working with strictness and parallelisation. That library seems to need more love, I think it's a great idea, but it doesn't really get noticed all that much. The hierarchical libraries documentation for it is a little lacking -- it doesn't even provide a reference or link to the paper, and many of the combinators, as well as the general idea of how to use it, are undocumented from there. It also spuriously contains an Assoc datatype, which if I recall correctly, was an example from the paper, but doesn't really belong in the library as far as I can tell. It would also be really nice to see the list of instances for the NFData class expanded to include other datatypes in the libraries, possibly also with compiler support for deriving, since it's mostly boilerplate. Speaking of boilerplate and the scrapping thereof, Data.Generics could theoretically also be used to write a relatively generic rnf/deepSeq, but in my attempts, it seems to be much much slower than using a specific normal-form class. Here's my code from quite a while back. As I recall, it's semantically correct, but ran about an order of magnitude slower. There might be a much better way to do it, I don't really know Data.Generics all that well. rnf :: (Data a) => a -> () rnf x = everything (\x y -> x `seq` y) (\x -> x `seq` ()) x deepSeq x y = rnf x `seq` y f $!! x = rnf x `seq` f x - Cale
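For reference, the niche discussed here was later filled by the deepseq package, which provides NFData instances for most library types and supports generic deriving, much as Cale asks for; a minimal sketch (the package and its Generic-based deriving are assumptions relative to this 2006 thread):

```haskell
{-# LANGUAGE DeriveGeneric, DeriveAnyClass #-}
import Control.DeepSeq (NFData, deepseq, force)
import GHC.Generics (Generic)

data Tree = Leaf | Node Tree Int Tree
  deriving (Show, Generic, NFData)  -- NFData derived generically, no boilerplate

main :: IO ()
main = do
  let t = Node Leaf (1 + 1) Leaf
  t `deepseq` putStrLn "t is in normal form"  -- evaluates the whole structure
  print (force t)
```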
Re: [Haskell-cafe] List comparisons and permutation group code
On 19/10/06, David House [EMAIL PROTECTED] wrote: On 19/10/06, Mikael Johansson [EMAIL PROTECTED] wrote: isIdentity xs = all (\(i,j) -> i == j) (zip [1..] xs) isIdentity' xs = xs == [1..(length xs)] Then isIdentity (1:3:2:[4..10]) finishes in an instant, whereas isIdentity' (1:3:2:[4..10]) takes noticeable time before completing. Why is this so? I'd have thought that the equality function for lists only forces evaluation of as many elements from its arguments as to determine the answer. In order to determine if [1..length xs] has an element at all, you have to evaluate length xs, which involves forcing the entire spine of xs, because integers can't be partially evaluated. Computing lengths of lists is a great way to introduce strictness.
Re: [Haskell-cafe] [off-topic / administrative] List Reply-to
On 11/10/06, Mikael Johansson [EMAIL PROTECTED] wrote: * It penalizes the person with a reasonable mailer in order to coddle those running brain-dead software. I don't agree. I view pine as something that should be classified as reasonable, and I feel penalized by non-munging. When you press R to reply, you should be asked whether you would like to reply to all recipients or not. Choose yes, and the message should be sent to the list. At least, that's how I remember pine working, though it's been about a year now since I used it. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Having typing problem with tan function.
The compiler is complaining that there's no instance of the Floating class for Int, because Floating is the class where pi and tan are defined, and it manages to infer that halfWidth must be an Int. How does it do this? Well, assuming that you are using SOE or a similar graphics library, the polygon function expects a list of Points, and type Point = (Int, Int). So, in particular, x1 - halfWidth must be an Int, and so halfWidth needs to be an Int. Hence, you probably want to change the definition of halfWidth to be: halfWidth = round (height * tan (pi / 3)) Also, just as a side note, your indentation is a little unusual. It's more common to align the 'let' with the 'in'. This would also be an appropriate place to use 'where'. hope this helps, - Cale On 12/10/06, Edward Ing [EMAIL PROTECTED] wrote: Hi, I am new to Haskell, so the following problem is probably easy for you to spot but difficult for me. Any pointers would be appreciated. In GHCI on a load I am getting the following message: Snowflake.hs:41:45: No instance for (Floating Int) arising from use of `pi' at Snowflake.hs:41:45-46 Probable fix: add an instance declaration for (Floating Int) In the first argument of `(/)', namely `pi' In the first argument of `tan', namely `(pi / 3)' In the second argument of `(*)', namely `(tan (pi / 3))' Failed, modules loaded: none. When I put the expression tan (pi / 3) at the GHCI interpreter prompt I get a value, but this is not accepted on a load of the code.
Here is the code: triangle :: Window -> (Point,Point) -> Int -> IO () triangle window ((x1,y1), (x2,y2)) size = let height = fromIntegral (y2 - y1) * 3 / 2 halfWidth = height * tan (pi / 3) in drawInWindow window (withColor (sizeColorMap size) (polygon [(x1,y1), (x1 - halfWidth, height), (x1 + halfWidth, height)]))
Re: Re: [Haskell-cafe] beginner's problem about lists
On 11/10/06, Nicolas Frisby [EMAIL PROTECTED] wrote: Intuitively q = lfp f = f(f(f(f(f(f (f(f(f _|_)))...)(*) r = lfg g = g(g(g(g(g(g (g(g(g _|_)))...) (**) This way of writing it would imply (at least to me) that the number of f's and g's involved is finite, when that's not really the intention. It's probably better to write something like: q = lfp f = f (f (f (f (... ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Yet Another Haskell Tutorial Question
That's about right, though you could shorten a few of the definitions by matching with the wildcard pattern _ like this: tuple4 :: Tuple a b c d -> Maybe d tuple4 (QuadrupleTuple a b c d) = Just d tuple4 _ = Nothing It's also possible to use case: tuple3 :: Tuple a b c d -> Maybe c tuple3 x = case x of QuadrupleTuple a b c d -> Just c; TripleTuple a b c -> Just c; _ -> Nothing On 11/10/06, Bill Mill [EMAIL PROTECTED] wrote: I've been working through the Yet Another Haskell Tutorial at http://www.cs.utah.edu/~hal/docs/daume02yaht.pdf, and I've gotten up to Exercise 4.6. It says: === YAHT === Write a datatype Tuple which can hold one, two, three or four elements, depending on the constructor (that is, there should be four constructors, one for each number of arguments). Also provide functions tuple1 through tuple4 which take a tuple and return Just the value in that position, or Nothing if the number is invalid (i.e., you ask for the tuple4 on a tuple holding only two elements). === /YAHT === I've got a brute-force answer, but I'm sure that there's a better way, and I can't find it. The code I've currently got is at http://paste.lisp.org/display/27806. Anybody want to point me in the right direction? Thanks Bill Mill bill.mill at gmail.com
Re: [Haskell-cafe] Re: Haskell performance (again)!
On 09/10/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Cale Gibbard wrote: I might also like to point out that by small and large, we're actually referring to the number of ways in which components of the datastructure can be computed separately, which tends to correspond nicely to how one usually pictures small and large pieces of data. I'm not sure what you mean. I thought it is appropriate to count the number of approximations to a given value v, i.e. count how many v' there are with v' ⊑ v in the usual CPO ordering, so that Cons _|_ [1,2] ⊑ Cons 3 [1,2] etc. That count is either large or small. Yeah, that's exactly what I'm talking about. :)
Re: [Haskell-cafe] a monad for secret information
On 09/10/06, Seth Gordon [EMAIL PROTECTED] wrote: I finally (think I) understand monads well enough to make one up: module Secret (Secret, classify, declassify) where data Secret a = Secret a classify :: a -> Secret a classify x = Secret x declassify :: Secret a -> String -> Maybe a declassify (Secret x) "xyzzy" = Just x declassify (Secret x) _ = Nothing instance Monad Secret where return = classify (Secret x) >>= f = f x The nice thing about this is that (correct me if I'm wrong) clients of the module can't sneak information out of the monad without either using the right password or by using one of the unsafe*IO methods. (Or by running the program in a debugger. But you get the idea.) The not-so-nice thing is that the literal text of the password is baked into the data definition. I'd like to have a more general version of Secret that allows someone to pass the password in when constructing a secret, and preserves that password when return is used, but doesn't let the second argument of (>>=) see the password. Something like this: data Classification pw a = Classification pw a declassify (Classification pw a) pw' = case pw' of pw -> Just a; _ -> Nothing type Secret = Classification "xyzzy" ...but that doesn't parse. Is it even possible to have a type like this that still observes the monad rules? Is this the sort of thing that I need to understand arrows to pull off? Why not just: secret :: a -> Classification String a secret = Classification "xyzzy" The password string isn't part of the type, it doesn't even necessarily exist at compile time. You might have just got confused between type and data constructors for a moment.
[Haskell-cafe] [Offtopic] Re: Re: A better syntax for qualified operators?
What a beautiful world this could be... ;-)) (*) Cheers, Ben (*) Donald Fagen (forgot the name of the song) I think I.G.Y. (International Geophysical Year) is it: On that train all graphite and glitter Undersea by rail Ninety minutes from New York to Paris (More leisure time for artists everywhere) A just machine to make big decisions Programmed by fellows with compassion and vision We'll be clean when their work is done We'll be eternally free yes and eternally young What a beautiful world this will be What a glorious time to be free ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Java or C to Haskell
How about something like this? import Data.List findMatch xs ys k = lookup k . concat $ zipWith zip (substrings xs) (substrings ys) where substrings = nonempty . map (nonempty . inits) . tails where nonempty = filter (not . null) On 20/09/06, Matthias Fischmann [EMAIL PROTECTED] wrote: ... and if you want to search strings, not single characters: findmatch s t e = take m . drop n $ t where m' = length e (n, m) = f 0 s f i s | take m' s == e = (i, m') | null s = (0, 0) | otherwise = f (i+1) (tail s) findmatch "asdfasdf" "asdfxvdf" "fas" == "fxv" (this one skips equality checks before *and* after the match. feel free to post the necessary modifications. :) matthias On Wed, Sep 20, 2006 at 02:22:29AM -0700, Carajillu wrote: To: haskell-cafe@haskell.org From: Carajillu [EMAIL PROTECTED] Date: Wed, 20 Sep 2006 02:22:29 -0700 (PDT) Subject: Re: [Haskell-cafe] Java or C to Haskell Yes, they must be equal the whole way, I like this recursive solution :) Ketil Malde-3 wrote: Carajillu [EMAIL PROTECTED] writes: the compare function just compares the two lists and returns true if they are equal, or false if they are not. find_match "4*ha" "4*5a" 'h' returns '5' (5 matches with the h) find_match "4*ns" "4dhnn" 'k' returns '' (no match at all - lists are different anyway) Must they be equal the whole way, or just up to the occurrence of the searched-for character?
find_match (x:xs) (y:ys) c | x == c = Just y | x /= y = Nothing | True = find_match xs ys c find_match [] [] _ = Nothing Or, to check the whole list: find_match (x:xs) (y:ys) c | x == c && xs == ys = Just y | x /= y = Nothing | True = find_match xs ys c find_match [] [] _ = Nothing -k
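Cale's substring version can be made into a small self-contained program; a sketch, with the example strings taken from the thread:

```haskell
import Data.List (inits, tails)

-- Pair every substring of xs with the equally positioned substring
-- of ys, then look the key up in that association list.
findMatch :: String -> String -> String -> Maybe String
findMatch xs ys k =
    lookup k . concat $ zipWith zip (substrings xs) (substrings ys)
  where
    substrings s = nonempty (map (nonempty . inits) (tails s))
    -- written as a function binding so it stays polymorphic
    -- (it is used at two different list types)
    nonempty zs = filter (not . null) zs

main :: IO ()
main = do
  print (findMatch "asdfasdf" "asdfxvdf" "fas")  -- Just "fxv"
  print (findMatch "4*ha" "4*5a" "h")            -- Just "5"
```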
Re: [Haskell-cafe] Re: map (-2) [1..5]
On 08/09/06, Brian Hulley [EMAIL PROTECTED] wrote: Jón Fairbairn wrote: Brian Hulley [EMAIL PROTECTED] writes: Cale Gibbard wrote: Anyway, the point of all this is that 0,1,2... are not really literals at all. They're nullary operators which give particular elements of any given instance of Num. Perhaps at some level in the compiler after performing the fromInteger transformation they may be taken as literal integers, but there is no reason that this level has to be exposed to the user. This seems very theoretical to me. One of the most valuable aspects of Haskell is its theoretical underpinning. Don't play fast and loose with that! You're right. I should have made a proper argument, so here goes: 1) Num a implies that for all a, there is an additive inverse for a (given by negate) 2) I think Cale is saying that it is sufficient to have overloaded nullary operators 0 and 1 in conjunction with the (+) and (negate) methods of Num, in order to construct all elements of Num 3) I think also Cale's argument is that the other nullary operators, 2, 3, ... are just there for convenience, but that this convenience is a good thing. So my argument would be that given that we only actually need 0 and 1, but we already have 2, 3, ... for convenience, there is no reason to not also have -1, -2, -3, ... as well, for even more convenience, especially as the numeric hierarchy starts with a structure where every element is required to have an additive inverse. My counterargument here is that this doesn't actually increase convenience. In fact, for a lot of reasons, it decreases convenience. [1] - is a varsym. The logical way of achieving what you suggest (ie -1, -2, ... as constructors for Integer) would be to make it introduce a consym the way : does, but then it couldn't be an infix operator anymore. I don't think it's necessary to use the same kind of rule for '-' as for ':'.
The rule could be that a '-' immediately followed by a digit starts a negative number, otherwise the '-' is treated as it is at the moment (ie either the start of a comment or a varsym). I imagine that almost every editor at least does lexical fontification, and if so, then I don't think there could be much confusion in practice between these uses of '-'. Also, the fact that we want to allow pattern matching against negative integers suggests that positive integers and negative integers should have equal status regarding literal representation (more precisely: nullary operator overloaded for the integers), rather than having to introduce a special pattern-matching rule for the negatives. Pattern matching on floats is an abomination, definitely a candidate for removal. Seconded. negate (expNat 4 2) because this would free the ^ symbol for some more widely applicable use, and would also make the particular choice of exponentiation operator more explicit Agreed, though I'd want expt to be part of a typeclass (possibly multi-parameter to get expt :: Integral a => a -> Natural -> a as an instance?). Yes, a typeclass for expt would be ideal (and a newtype for Natural). Num itself needs to be split, but we can't do it sanely without something like class aliases. I actually think that the (^), (^^), (**) distinction is rather refreshing to see in a programming language. Most languages don't take the care to distinguish that there are actually different levels of definition of exponentiation. You can't just merge them -- have you looked at their types? (^^) can't be defined for Integers, but it works fine for Rational bases; (**) can't be defined for Rational bases, but works fine for floating point. Additionally, these operations have different performance and numerical properties. If we have a typeclass for this, it will be multiparameter, and there will not be a functional dependency. This could create additional annoyance in various situations.
In mathematics, these operations are given the same notation, but are definitely distinguished between by the humans using them. Perhaps we could even use (neg) instead of (negate) in deference to the 3 letter varids used for other common arithmetic ops, to get: neg (exp 4 2) neg y * 56 (neg can also be read declaratively as negative, instead of the imperative negate.) I obviously already disagree with this whole bit anyway, but eek, don't steal exp. We already have a very important function (in fact, I'd be willing to say that it's probably the most important function in all of mathematics) called exp, and it's named after its standard name in mathematics. Also, not using infix notation for exponentiation would obfuscate many simple mathematical programs. Anything involving a polynomial will look terrible. (I don't care if you're a C programmer and you're used to polynomials looking terrible, I'm not.) I don't really care too much either way with negate - neg, I know some people hate abbreviations in the libraries, but that one is probably sufficiently
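The incompatibility Cale points to is easy to check directly. A small sketch (types as in the Haskell 98 Prelude; the sample expressions are my own, not from the thread):

```haskell
-- The three Prelude exponentiation operators have genuinely different types:
--   (^)  :: (Num a, Integral b)        => a -> b -> a  -- non-negative integral exponent
--   (^^) :: (Fractional a, Integral b) => a -> b -> a  -- integer exponent, needs division
--   (**) :: Floating a                 => a -> a -> a  -- arbitrary exponent, via exp/log
main :: IO ()
main = do
  print (2 ^ 10 :: Integer)      -- 1024: Integer base is fine for (^)
  print (2 ^^ (-2) :: Rational)  -- 1 % 4: negative exponents need a Fractional base
  print (2 ** 0.5 :: Double)     -- square root of 2, only at floating-point types
  -- (2 ^^ (-2) :: Integer) and (2 ** 0.5 :: Rational) are type errors,
  -- which is exactly why the three operators cannot be merged naively.
```

Trying to unify these behind one class would, as the post says, force a multi-parameter class with no functional dependency between base and exponent types.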
Re: [Haskell-cafe] map (-2) [1..5]
On 17/08/06, Brian Hulley [EMAIL PROTECTED] wrote: Jared Updike wrote: In other words, superscripts bind tighter than prefix ops but prefix ops bind tighter than infix. I see. My point is that there already exists a convention[1] that the way to type in the typeset -4² is -4^2 which means -(4^2) not (-4)^2 because - as a prefix op has the same precedence as binary subtraction, not super tight like normal prefix ops (i.e. normal function application) as you would like it to be (if I understand correctly). You are welcome to break an existing (unofficial) convention for the sake of lexical syntax[2]. [2] http://wadler.blogspot.com/2006/01/bikeshed-coloring.html This choice of precedence for unary - conflicts with the normal usage in languages like C, where unary ops obviously bind tighter than infix. The typesetting in maths conveys a lot of info eg to distinguish f -x from f - x or f-x, and so the relationship between the visual representation and the meaning depends on a knowledge of various conventions that have evolved over time, and the knowledge of when to apply them in a given context. In contrast, a programming language should be based on general concepts uniformly applied. In Haskell we have operators, identifiers, prefix application using an identifier and infix application using a symbol, and a uniform way to convert a symbol to an identifier and vice versa, and a uniform way of forming sections. All this machinery should be enough to be satisfied with. However, someone somewhere decided that one trivial arithmetic operation, namely unary minus, should be allowed to totally ruin everything, and not only that, but that half of any number line, the positives, should (literally!) have a special advantage over the other half, the negatives. Thus while I can agree with Wadler that it's easy to have different opinions on little issues, I think that in this case the goal of uniformity leads to an objective answer. 
Of course not all languages care about being uniform or neat ;-) Best regards, Brian. First, f - x, f -x, and f-x all tend to mean the same thing in mathematics, though f -x would be considered poorly typeset (and, to some degree, they're all poorly typeset, because we're using hyphens rather than the minus symbol, which really don't look the same). We tend to write f(-x) when applying a function f to the negation of x, even in circumstances when application is normally written without parentheses. Secondly, I think it's quite a reasonable thing to do to treat unary negation as a separate operation. It follows quite naturally to do so from the definition of a ring. While having separate literals for negative numbers might be okay, it seems unnecessary in light of the fact that we *do* want a nice looking unary negation symbol, which doesn't strictly apply to literals. If -x suddenly became a non-expression, and I had to write 0-x, -1*x or (negate x) instead, I'd consider that a severe enough bug that I would avoid upgrading my compiler until it was fixed. In mathematics, we don't use separate symbols for negative integers, and negated positive integers, even though in the underlying representation of the integers as equivalence classes of pairs of naturals, we can write things like -[(1,0)] = [(0,1)], which expressed in ordinary notation just says that -1 = -1. This doesn't bother us, because the two things are always equal. Another thing to note is that all the natural literals are not, as one might initially think, plain values, but actually represent the embedding of that natural number into the ring (instance of Num), by way of 0 and 1. They simply provide a convenient notation for getting particular values of many rings, but in many cases, don't get one very far at all before other functions must be introduced to construct the constant values one wants. 
While there always is a homomorphism from Z to a ring (represented in Haskell by fromInteger), one would get similar expressiveness with just the nullary operators 0 and 1, and the unary negation as well as addition and multiplication (albeit with an often severe performance hit, and some annoyance; I'm not recommending we really do this, simply characterising the purpose of numeric literals). If the performance issue regarding the polymorphic literal -5 meaning negate (fromInteger 5) is a problem, it would be easy enough to agree for the compiler to find and rewrite literals like that as fromInteger (-5) instead, where -5 is the precomputed integer -5. Assuming that fromInteger is not broken, that will always mean the same thing (because fromInteger is supposed to be a homomorphism). Similarly, when the type of (fromInteger x) is known statically to be Integer, the compiler can rewrite it as x. In any event, this is a tiny constant factor performance hit. Anyway, the point of all this is that 0,1,2... are not really literals at all. They're nullary operators which give particular elements of any given
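Cale's point that literals are really applications of fromInteger can be made concrete with a toy Num instance (the Mod7 type below is my own illustration, not from the thread):

```haskell
-- A toy ring: the integers mod 7. Every integer literal at this type
-- elaborates to an application of this instance's fromInteger, and
-- -1 elaborates to negate (fromInteger 1).
newtype Mod7 = Mod7 Integer deriving (Eq, Show)

instance Num Mod7 where
  Mod7 a + Mod7 b = Mod7 ((a + b) `mod` 7)
  Mod7 a * Mod7 b = Mod7 ((a * b) `mod` 7)
  negate (Mod7 a) = Mod7 (negate a `mod` 7)
  abs             = id
  signum (Mod7 0) = Mod7 0
  signum _        = Mod7 1
  fromInteger n   = Mod7 (n `mod` 7)

main :: IO ()
main = do
  print (10 :: Mod7)                    -- Mod7 3: fromInteger 10
  print (-1 :: Mod7)                    -- Mod7 6: negate (fromInteger 1)
  -- and, on the subject line's pitfall: (-2) is a negative literal,
  -- not a section of (-), so the idiomatic section is subtract:
  print (map (subtract 2) [1..5 :: Integer])  -- [-1,0,1,2,3]
```

Because fromInteger here is a ring homomorphism, negate (fromInteger 1) and fromInteger (-1) agree, which is exactly the rewrite the post describes.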
Re: [Haskell-cafe] Problem trying to get class Bounded to work
Hello, There's a prelude function called asTypeOf which might help here: asTypeOf :: a -> a -> a asTypeOf = const You can use maxBound `asTypeOf` (fst3 . head $ elements) Another option is GHC's scoped type variables, which you use by adding a type signature to your pattern. See: http://www.haskell.org/ghc/docs/latest/html/users_guide/type-extensions.html#scoped-type-variables - Cale On 22/05/06, Brian Hulley [EMAIL PROTECTED] wrote: Brian Hulley wrote: Hi - I've got the following function which doesn't compile: createMonoMultiFont :: (MonadException m, Enum i, Bounded i) => [(i, Font, Colour)] -> m (MonoMultiFont i) createMonoMultiFont elements = liftIO . block $ do mmfRaw <- duma_MonoMultiFont_create (fromEnum (maxBound::i)) mmfPtr <- newForeignPtr duma_MonoMultiFont_Release mmfRaw return $ MonoMultiFont mmfPtr The problem is that ghc complains that there is no instance for Bounded i even though I've written Bounded i in the type declaration for the function. Here is a gross hack that compiles: createMonoMultiFont elements = liftIO . block $ do let (onei,_,_):_ = (undefined, undefined, undefined) : elements upper :: Bounded a => a -> a upper _ = maxBound last = upper onei mmfRaw <- duma_MonoMultiFont_create (fromEnum last) -- adding of elements to mmfRaw omitted for clarity mmfPtr <- newForeignPtr duma_MonoMultiFont_Release mmfRaw return $ MonoMultiFont mmfPtr Still, it would be nice to be able to just write (maxBound::i) since the dictionary for i is present in the result. Brian. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
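Both suggestions from the reply can be shown on a small self-contained example (sizeOf98 and sizeOfGHC are made-up names for illustration, not the poster's FFI code):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

-- 1. Haskell 98 style: tie the types together with asTypeOf (= const),
--    so maxBound is forced to the element type of the list.
sizeOf98 :: (Enum i, Bounded i) => [i] -> Int
sizeOf98 xs = fromEnum (maxBound `asTypeOf` head xs)

-- 2. GHC style: an explicit forall brings i into scope in the body,
--    so (maxBound :: i) means what the poster wanted it to mean.
sizeOfGHC :: forall i. (Enum i, Bounded i) => [i] -> Int
sizeOfGHC _ = fromEnum (maxBound :: i)

main :: IO ()
main = do
  print (sizeOf98 [False])  -- 1: fromEnum (maxBound :: Bool)
  print (sizeOfGHC [LT])    -- 2: fromEnum (maxBound :: Ordering)
```

Without the forall (or the asTypeOf trick), the `i` in the body is a fresh type variable unrelated to the one in the signature, which is exactly the "no instance for Bounded i" error the poster hit.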
Re: [Haskell-cafe] Newbie: Haskell Sine Oddities
As others have pointed out, floating point representations of numbers are not exact. You don't even have to use fancy functions like sine to see all kinds of nice algebraic properties break down. let x = 1e8; y = 1e-8 in (y + x) - x == y + (x - x) evaluates to False. So from this you can see that addition is not even associative, and neither is multiplication. Distributivity also fails in general. Floating point computations are always approximate and have some level of error associated with them. If you want proper real numbers, things like equality testing become impossible in general. If you look around, I think there are a couple of libraries in Haskell which let you work with arbitrary precision reals though. Try: http://www.haskell.org/haskellwiki/Exact_real_arithmetic - Cale On 29/04/06, Aditya Siram [EMAIL PROTECTED] wrote: The Sine function in the prelude is not behaving as I expected. In the following Hugs session I tested sin on 0, 90, 180, 360 degrees. Prelude> sin 0 0.0 --correct Prelude> sin (pi/2) 1.0 --correct Prelude> sin pi 1.22460635382238e-16 --WRONG! Prelude> sin (2*pi) -2.44921270764475e-16 --WRONG! Is this normal behaviour? Or am I using the trig functions in an unexpected way? Thanks... Deech ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
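Cale's associativity counterexample runs as-is at Double precision; a sketch showing the two intermediate results as well:

```haskell
-- Adding the tiny y to the huge x loses y to rounding (it perturbs
-- only the last ulp of x), so the order of operations matters.
main :: IO ()
main = do
  let x = 1e8  :: Double
      y = 1e-8 :: Double
  print ((y + x) - x)                  -- one ulp of 1e8, not 1e-8
  print (y + (x - x))                  -- 1.0e-8: x cancels exactly first
  print ((y + x) - x == y + (x - x))   -- False
```

The same effect explains `sin pi` above: pi itself is only a Double approximation of the real number, and sin of that approximation is a tiny nonzero value.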
Re: [Haskell-cafe] Newbie Instance Of Floating Int Error
One thing to try: type Point = (Float, Float) The problem is that x1, y1, x2, and y2 are Ints, and so (x2-x1)^2 + (y2-y1)^2 is also an Int, but then you try to take the square root, which is an operation only available on floating point values (more specifically, types in the class Floating, which also includes things like Complex Double). Another option is to leave the Point type as-is, but do a conversion just before applying the sqrt, say, by applying the fromIntegral function, which serves to convert from types in the class Integral, like Int and Integer to any numeric type you need. distBetween (x1,y1) (x2,y2) = sqrt . fromIntegral $ (x2-x1)^2 + (y2-y1)^2 hope this helps, - Cale On 27/04/06, Aditya Siram [EMAIL PROTECTED] wrote: Hi all, I just started working with Haskell and am having some difficulties with its type system. Here is function that is supposed to calculate the distance between two coordinates: distBetween (x1,y1) (x2,y2) = sqrt((x2-x1)^2 + (y2-y1)^2) I am trying to explictly give it a type signature. Here is what I have tried and the errors generated by Hugs: type Point = (Int,Int) distBetween :: Point -> Point -> Float ERROR - Type error in explicitly typed binding *** Term : distBetween *** Type : Point -> Point -> Int *** Does not match : Point -> Point -> Float distBetween :: Point -> Point -> Int Instance of Floating Int required for definition of distBetween Any help is appreciated... Deech ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
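Putting Cale's second suggestion together with the poster's intended signature gives a complete working version (the test point is my own):

```haskell
type Point = (Int, Int)

-- fromIntegral bridges the Integral and Floating worlds: the squared
-- distance is computed at Int, then converted before taking sqrt.
distBetween :: Point -> Point -> Float
distBetween (x1,y1) (x2,y2) =
  sqrt (fromIntegral ((x2 - x1)^2 + (y2 - y1)^2))

main :: IO ()
main = print (distBetween (0,0) (3,4))  -- 5.0, the classic 3-4-5 triangle
```

The key is that fromIntegral is applied before sqrt, so sqrt sees a Float and the Floating Int instance is never demanded.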
Re: [Haskell-cafe] Current situation regarding global IORefs
On 24/04/06, Adrian Hey [EMAIL PROTECTED] wrote: Lennart Augustsson wrote: I think global mutable variables should be regarded with utmost suspicion. There are very few situations where they are the right solution. Well IMO even the use of the term global mutable variable causes muddled thinking on this and I wish people would stop it. There's no reason to regard top level things with identity (call them TWIs or objects or whatever) with any more suspicion than top level IO actions themselves. What do you mean by top level IO action? If you mean something like 'getLine', then there is a huge difference. If you mean actions which execute automatically in any program which imports the given module, then I'd contend that Haskell doesn't really even have those (apart from possibly 'main', if you count that at all). In the latter case, you might have disconnected components which all must run, but since they do IO, are potentially noncommutative. With the ability to mark actions as automatically executing without some way to control the order of that execution, program behaviour is unpredictable in general. At least, I wouldn't want the language to allow that. However, putting an explicit ordering on their execution is essentially equivalent to defining a new IO action which executes each of them in a given sequence, obviating the need for such a feature in the first place. While only allowing the definition of top-level IORefs (i.e. not unrestricted IO) wouldn't cause quite as much harm, it's still questionable as to whether it's ever actually necessary. One can get computation-global IORefs easily using something along the lines of ReaderT IORef IO. By newtyping the monad, one could exert even more control over how the IORef was used. One problem with real top-level IORefs is that it leaves no way for the module user to reset things in general. 
As a library designer, you might think that the users of your library will only ever want to initialise things once, but this is being a little bit overly forceful -- what if the program has reloaded its configuration while running and wants to restart the functionality your module provides? If you haven't explicitly provided a way to reset the IORefs, there's no way for the module user to reliably do so. There are plenty of other nice ways to enforce things such as particular actions only running once and so on. For one, you could use a new monad, which would give extremely fine control over what actions were permitted, but even with just IO, there are plenty of things one can do -- just not globally. We can relatively easily create a context which provides an IO action that will only run once in that context, and return the same value as the first time without repeating the execution thereafter:

singletonRegion :: IO a -> (IO a -> IO b) -> IO b
singletonRegion action region = do
  r <- newIORef Nothing
  let action' = do
        t <- readIORef r
        case t of
          Nothing -> do
            v <- action
            writeIORef r (Just v)
            return v
          Just v -> return v
  region action'

test1 = singletonRegion (putStrLn "Hello") $ \greet -> do
  greet -- only this one will actually have any effect.
  greet
  greet

With a custom monad, this sort of setup would be built into the run function which transforms the action back into IO, and so would be even more transparent, while still not creating irrevocable program-global state. One thing I never could fathom about the position of the nay-sayers in this debate is what exactly is it that they object to? Is it existence of top level TWIs and of IO operations that reference them *in principle*? Or are they content with their existence but just don't want to allow people to use Haskell to define them? If it's the former then we should be purging the IO libraries of all such horrors, though I can't see much remaining of them (well anything actually). 
But I guess an IO incapable language might still have some niche uses. If it's the latter then we are advocating a language which cannot be used to implement many demonstrably useful IO modules and libraries, *by design*. If so, the claim that Haskell is a general purpose programming language seems quite bogus and should be removed from the haskell.org home page IMO. This argument sounds quite a lot like the ones used by those arguing against the removal of GOTO from programming languages. Sometimes, by explicitly not providing some feature, you're also making code in general easier to comprehend (especially when that feature has invisible global effects on programs, where you'd need to look carefully at the source of all modules involved in order to determine what was actually happening in the presence of its use). It can also prevent certain classes of design problems from ever cropping up. The more state which one has to reconstruct in order to test an arbitrary
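The "ReaderT IORef IO" alternative mentioned above can be sketched concretely with mtl (the App/tick/runApp names are my own illustration; the point is that the "global" state lives only for the extent of one run, so it can always be reset by running again):

```haskell
import Control.Monad.Reader
import Data.IORef

-- The whole computation reads one shared IORef from its environment.
type App a = ReaderT (IORef Int) IO a

-- Increment the shared counter, returning its previous value.
tick :: App Int
tick = do
  r <- ask
  liftIO $ do
    n <- readIORef r
    writeIORef r (n + 1)
    return n

-- Each call to runApp allocates a fresh IORef: no irrevocable
-- program-global state, and "resetting" is just running again.
runApp :: App a -> IO a
runApp body = do
  r <- newIORef 0
  runReaderT body r

main :: IO ()
main = do
  xs <- runApp (sequence [tick, tick, tick])
  print xs          -- [0,1,2]: state is shared within one run
  y <- runApp tick
  print y           -- 0: a fresh run starts from a fresh IORef
```

Newtyping App, as the post suggests, would additionally let the module control exactly which operations on the IORef are exposed.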
Re: [Haskell-cafe] Current situation regarding global IORefs
On 21/04/06, Brian Hulley [EMAIL PROTECTED] wrote: Robert Dockins wrote: On Apr 21, 2006, at 10:34 AM, Brian Hulley wrote: Robert Dockins wrote: On Apr 21, 2006, at 9:56 AM, Brian Hulley wrote: Hi - I've run into the global mutable state problem described in http:// [snip] There is only one GUI for the application and only one control in it can have the keyboard focus so it seems natural to use global state here I'd suggest you consider not making those assumptions... they are the kinds of assumptions that can make later code reuse and maintenance more difficult than it should be. (Obviously, if code reuse/ maintenance is a low priority then it doesn't matter). , but I suppose I could also look into using a state monad. The advantage (perhaps also disadvantage ;-) ) of global state is that it allows me to easily convert all my old C++ singleton classes to Haskell modules... ramble type=somewhat coherent Ahhh... the singleton pattern. There is a debate among OO theorists about whether the singleton pattern is actually a good idea. I tend to side with those who say that it is Just Wrong. [snip] Thanks for the comments. I've now changed everything so that controls use a ManagerM monad which wraps up the state instead of using the IO monad so there are no longer any global variables. It wasn't as difficult as I had thought and as you say it makes everything much more scalable, although at the expense of having to use liftIO in various places. I've defined my state monad by: data MState = MState { keyboard :: !(Maybe Control) } -- etc - other state here also type ManagerM a = StateT MState IO a and everything works ok. However if I try to use a newtype instead of a type (to completely hide the representation) eg newtype ManagerM a = ManagerM (StateT MState IO a) deriving (Monad, MonadIO, MonadState) it won't compile. Does this mean it is not possible to wrap combined monads in a newtype? 
I notice that the examples in tutorials I've looked at tend to always just use type instead of newtype. try deriving (Monad, MonadIO, MonadState MState) -- I find that newtype deriving doesn't like guessing at other class parameters, even with the fundeps there. Another point is that I'm not sure what is the proper way to represent the state itself ie should each component of the state be a separate IORef to avoid having to copy the whole state each time or is it better practice to just use an immutable record as I've done above? If you were to use IORefs/MVars, it would likely be enough to use ReaderT instead of StateT, since you likely wouldn't be replacing your mutable cells, just their contents. Both routes are okay -- note that Haskell data structure nodes are usually just bunches of pointers to values (or more properly, code which returns values) anyway, so you should only be copying a few pointers when updates are made. (In a similar way to how replacing the head of a list is constant time and space, and not linear.) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
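Cale's fix (spelling out the MonadState parameter in the deriving clause) can be shown in a compilable sketch. Note this is my own reduction of the poster's code: Control is replaced by String, and on modern GHC the Functor/Applicative superclasses must also be derived:

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving, FlexibleInstances #-}
import Control.Monad.State

-- Stand-in for the poster's state; the strictness annotation needs
-- parentheses around the compound type: !(Maybe String).
data MState = MState { keyboard :: !(Maybe String) }

-- Giving MonadState its parameter explicitly is what lets the
-- newtype-deriving mechanism find the right instance.
newtype ManagerM a = ManagerM (StateT MState IO a)
  deriving (Functor, Applicative, Monad, MonadIO, MonadState MState)

runManagerM :: ManagerM a -> MState -> IO (a, MState)
runManagerM (ManagerM m) = runStateT m

main :: IO ()
main = do
  (_, s) <- runManagerM (put (MState (Just "focus"))) (MState Nothing)
  print (keyboard s)   -- Just "focus"
```

The newtype hides the representation completely while still getting the combined monad's operations for free.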
Re: [Haskell-cafe] Advice needed on best way to simulate an STL vector
Well, you could use something like C++ does for implementing STL vectors in terms of arrays -- iirc, it generally doubles the allocated size each time applying an operation to the vector would make it exceed its capacity (the current allocation size), and keeps track of the actual number of elements stored separately. It's important to do allocation which is proportional to the existing capacity, or repeated insertion will become quadratic time rather than linear. So essentially, some data structure like data Vector a = Vector { store :: Array Int a, end :: Int } would be a sufficient minimal way to start. When the store needs to be increased, you simply create a new array with twice as many elements, copy the initial elements in from the old array which is then thrown away, and only update end to the position of the actual last element. This is analogous to what C++ implementations of the vector class do. What will bite you is if you try to generalise from indices of type Int to instances of Ix -- the Ix operations assume that there are lower and upper bounds on indices. The upper bound of course quickly becomes a problem. You could however use Enum, which has toEnum and fromEnum, which are sufficient to use with the implementation in terms of Ints. It could also be claimed that Int isn't always the best initial type to use, and indeed I'd still feel safer with Integer somehow, but then, fromEnum and toEnum use Int, and if you have more than 2 billion elements in your vector at the same time, maybe you should be looking at your algorithm more carefully and/or doing your own low level memory allocation via FFI. :) - Cale On 19/04/06, Brian Hulley [EMAIL PROTECTED] wrote: Hi - In C++, STL provides a vector class which behaves as an array except you can insert/delete elements from it. I'm wondering what is the best Haskell data structure to use to simulate this, either mutable or immutable. 
I've looked at the Array interface, but although there is the // operation to replace some elements, there does not appear to be a simple way to delete/insert elements. Ideally I'd like functions like: -- Insert new elements starting at the specified index, moving the others up to make room insert :: Array i e -> i -> [e] -> Array i e -- Delete a range of elements, moving later elements back down delete :: Array i e -> i -> i -> Array i e -- Append a new element to the end of an array push_back :: Array i e -> e -> Array i e Is there an efficient way of implementing these operations in Haskell, based on arrays, or should I be using some other data structure altogether eg a list? Also, for large arrays, am I right in thinking that it is still more efficient to use immutable arrays rather than mutable arrays (because it is easier for gc to always just deal with immutable values)? Thanks, Brian. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
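Cale's Vector sketch with the doubling strategy can be filled in minimally (immutable arrays for clarity; a real implementation would use mutable arrays, and the copying on resize is what makes the amortized analysis work):

```haskell
import Data.Array

-- The store may be larger than the logical contents; end marks
-- one past the last used slot, as in a C++ vector.
data Vector a = Vector { store :: Array Int a, end :: Int }

emptyVec :: Vector a
emptyVec = Vector (listArray (0, 0) [undefined]) 0

capacity :: Vector a -> Int
capacity v = let (_, hi) = bounds (store v) in hi + 1

-- Amortized O(1): usually a single-slot update; on overflow, copy
-- into an array of twice the capacity, keeping insertion linear overall.
pushBack :: Vector a -> a -> Vector a
pushBack v x
  | end v < capacity v =
      v { store = store v // [(end v, x)], end = end v + 1 }
  | otherwise =
      let n   = capacity v
          new = listArray (0, 2*n - 1)
                          (elems (store v) ++ replicate n undefined)
      in pushBack (v { store = new }) x

toList :: Vector a -> [a]
toList v = take (end v) (elems (store v))

main :: IO ()
main = print (toList (foldl pushBack emptyVec [1..10 :: Int]))  -- [1..10]
```

The undefined padding is never forced, because toList only takes the first end elements; laziness makes this safe even with an immutable array.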
Re: [Haskell-cafe] Advice needed on best way to simulate an STL vector
I should perhaps point out that in the development GHC (iirc), there is a library called Data.Sequence which uses 2-3 finger trees to get rather nice efficient sequences. Operations on both ends (appending or dropping) are constant time, while operations in the middle tend to be on the order of the logarithm of the distance to the closer of the two ends. For instance, concatenation and splitting at a point both have complexity proportional to the logarithm of the smaller of the two parts involved. - Cale On 19/04/06, Brian Hulley [EMAIL PROTECTED] wrote: Thanks. I might try this if I don't have any luck with finger trees (from Udo's post), or if they seem too heavy for the simple thing I'm planning to use them for (implementing the text buffer for an edit control which needs a mutable array of lines where each line contains a mutable array of character info). I don't need non-Int indices so your data type for Vector would be fine. Best regards, Brian. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
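The Data.Sequence interface described above (now shipped in the containers package) looks like this in use; a brief sketch:

```haskell
import qualified Data.Sequence as Seq
import Data.Sequence ((<|), (|>), (><))
import Data.Foldable (toList)

main :: IO ()
main = do
  let s = Seq.fromList [2,3,4 :: Int]
  print (toList (1 <| s))        -- O(1) cons at the left end:  [1,2,3,4]
  print (toList (s |> 5))        -- O(1) snoc at the right end: [2,3,4,5]
  print (toList (s >< s))        -- concatenation:              [2,3,4,2,3,4]
  let (l, r) = Seq.splitAt 2 s   -- split near the middle, O(log n)
  print (toList l, toList r)     -- ([2,3],[4])
```

This gives exactly the insert/delete-anywhere behaviour the original poster wanted from an STL-style vector, with logarithmic rather than linear cost for edits away from the ends.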
Re: [Haskell-cafe] casting numerical types
Arbitrary numerical types don't support division. (They're rings, not fields.) The (Fractional a) => [a] -> a type is the best you can do. - Cale On 01/04/06, Matthias Fischmann [EMAIL PROTECTED] wrote: hi all, this should be a quick one (for now I would be happy with a that's impossible, just want to make sure I don't miss anything). I want to compute the average from a list of arbitrary numerical element type. I wanted to do this: avg :: (Num a) => [a] -> a avg xs = sum (map fromNum xs) / (fromIntegral (length xs)) but it doesn't compile. All I could get to work is this: avg :: (Fractional a) => [a] -> a avg xs = sum xs / (fromIntegral (length xs)) avgI :: (Integral a) => [a] -> Float avgI = avg . map fromIntegral The two function names for the same thing are tolerable, but not very elegant. Is there another option? Thanks, Matthias ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
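One variant not raised in the thread: if a fixed result type is acceptable, going through the Real class lets a single function average both integral and floating inputs (a sketch; avgR is my own name):

```haskell
-- Real has both Num and Ord superclasses and provides toRational,
-- so realToFrac can embed any Real into Double.
avgR :: Real a => [a] -> Double
avgR xs = realToFrac (sum xs) / fromIntegral (length xs)

main :: IO ()
main = do
  print (avgR [1, 2, 3, 4 :: Int])      -- 2.5
  print (avgR [1.5, 2.5 :: Double])     -- 2.0
```

This trades the polymorphic result of Cale's (Fractional a) => [a] -> a version for a single function name, which was the poster's actual complaint.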
Re: [Haskell-cafe] The simple programming language,
Well, I'm sure that lots of people here would be willing to help if you'd be more specific about what you're having trouble with. Just posting an assignment problem tends not to generate much discussion until after the assignment deadline. What have you already tried? What part are you stuck on? Also, if you're interested in more immediate responses, try #haskell on irc.freenode.net. For help with asking good questions, check out http://www.haskell.org/hawiki/HomeworkHelp - Cale P.S.: (Just to let everyone know, this is an assignment problem from the CSA 1080 Declarative Programming course at the University of Malta. The relevant URL is here: http://www.cs.um.edu.mt/~gpac1/ I've gotten a few questions about it lately, and there was a post to the libraries list about another problem on it.) On 26/03/06, pinturicchio [EMAIL PROTECTED] wrote: if somebody can help me with this I will be really thankful Finally, we come to the program. The simple programming language, which includes loops, conditionals and sequential composition, will be expressed as a datatype in Haskell. data Prg = Skip | PrintS String | PrintE Exp | Declare Variable | Variable := Exp | Prg : Prg | IfThenElse Exp (Prg, Prg) | While Exp Prg The program Skip does nothing. The program PrintS 'prints' the given string to the output. Similarly, PrintE prints the value of the given expression to the output. Declare declares a variable, assigning its initial value to 0. The := operator performs variable assignment. The : operator is sequential composition, while IfThenElse and While are conditional and while-loops respectively. The condition in both cases resolves to true if the value of the expression is not zero. 
A factorial program can be programmed in this small language as follows: counter = "counter" result = "result" factorial n = Declare counter : Declare result : result := Val 1 : counter := Val n : While (Var counter) ( result := (Var result) :*: (Var counter) : counter := (Var counter) :-: (Val 1) ) : PrintS ("Factorial of " ++ show n ++ " is:") : PrintE (Var result) (a) The simulate function takes a Store and a program, and returns an updated store and list of output strings: simulate :: Store -> Prg -> (Store, [String]) Using pattern matching, define simulate for Skip, PrintS and variable declarations. (b) Add the cases for PrintE and variable assignment. (c) Sequential composition can be defined by simulating the two instructions in sequence. Complete the following case: simulate store (p1 : p2) = let (store', outs1) = simulate store p1 (store'',outs2) = simulate store' p2 in ... (d) Define simulate for conditionals. (e) We can 'cheat' by simulating while-loops in terms of conditionals and sequential composition: simulate store (While condition program) = simulate store (IfThenElse condition ( program : While condition program , Skip ) ) Using this definition, test your program with the factorial program. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Building Monads from Monads
On 23/03/06, Daniel McAllansmith [EMAIL PROTECTED] wrote: Hi, I've got a few (9) random questions, mainly about monads and building monads from existing monads, partly trying to confirm conclusions I've come to through experimentation. Any, and all, attempts to enlighten me will be much appreciated. Thanks Daniel First, terminology. In StateT s (ReaderT r IO) () Q. 1) StateT is referred to as the outermost monad, and IO as the innermost monad, correct? Yeah, that's the somewhat informal terminology. Probably better would be that StateT is the outermost monad transformer, and IO is the transformed monad, or base monad. Using a monadic function, eg MonadReader.ask, in a monadic expression will access the outermost monad of the appropriate class. Q. 2) Does this work for all monad classes in all expressions? No, basically, applying ask will use the version of ask for your particular monad. (Including all transformers.) Various instances of MonadReader are used to automatically get reader instances on transformed monads in various cases involving the MTL transformers, but not in all cases. (Read the list of instances for MonadReader to find out exactly which monad transformers preserve it.) If there's no instance, you have to write one yourself. Also, when you're newtyping a monad which is an instance of MonadReader, you can use newtype deriving to get an instance for the newtype automatically. How does Control.Monad.Trans.lift work? It seems that a single application of lift will find the next outermost monad of the appropriate class, but if you want to dig deeper into the nest you need to apply lift according to the monads actual depth in the nest. Q. 3) Why the different behaviour? Lift is best understood via its type: lift :: (MonadTrans t, Monad m) => m a -> t m a it simply takes a value in the base monad, and lifts it into the transformed monad. 
When you have a stack of transformers, you may have to apply it multiple times if you want to lift something up from one monad, through a stack of transformations of that monad. For example, I might be working in the StateT Integer (ReaderT String IO) monad, and want to get an analogue of (print "Hello") which is of type IO () in my monad. First I apply lift to it, to get a value in (ReaderT String IO ()), then again to get something of type StateT Integer (ReaderT String IO) (). That's all it does - there's no magic with locating applications of transformers or anything like that, it just goes one level each time. However, there's also liftIO, which is a special case for when the base monad is IO -- this lifts an IO action into any monad which is an instance of MonadIO. This class is preserved by most monad transformers, and is satisfied by IO, so the end result is like applying lift enough times to bring an IO action up through as many transformers as necessary, but without having to know how many beforehand. Q. 4) Is it possible to give a type to the lifted function so that the monad of the correct class _and_ type is used? E.g. dig into a String Reader rather than an Int Reader. I'm not completely sure what you're after here -- basically, you just lift things into whichever monad you're using. If you want to be polymorphic, but insist on a particular instance of MonadReader, that's easy enough, just put a constraint like (MonadReader String m) or something similar on your type. Defining an instance of MonadTrans for a monad instance seems universally useful. Q. 5) Are there obvious situations where it's not useful or possible? MonadTrans is only for monad transformers. Actual monads can't be turned into transformers in any automatic way. 
However, in a lot of cases, it's quite natural and obvious how to write a monad transformer, such that applying that transformer to the identity monad gives the monad you were thinking of (for example, writing code for StateT instead of State), and when this is the case, you usually should, since it's usually not much extra trouble, and it buys you a lot of extra flexibility later. Carrying out IO in a nested monadic expression requires liftIO. Apart from having to type an extra 7-9 characters it seems good to use liftIO even in plain IO monad expressions so they can become nested expressions with no trouble later on. Q. 6) Is it safe to always use liftIO, even in plain IO monad? It's safe, sure, though a little awkward. It's easy enough to lift whole IO computations later on anyway. The only benefit would be if you wanted to later intersperse actions into the code which came from a transformed version. Q. 7) If it's safe to do, why aren't functions in the IO monad just typed in the MonadIO class instead? First of all, historical reasons -- the MTL is newer than the IO monad by a good bit, and it doesn't exist in Haskell 98. While it would be nice to have automatically lifted IO actions, it's actually fairly rare that this actually gets in your way. The biggest problem
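The lift/liftIO distinction from Cale's example stack can be sketched end to end with mtl (greet is my own example name; the StateT Integer (ReaderT String IO) stack is the one from the post):

```haskell
import Control.Monad.Reader
import Control.Monad.State

type M a = StateT Integer (ReaderT String IO) a

greet :: M ()
greet = do
  lift (lift (putStrLn "Hello"))    -- one lift per transformer layer
  liftIO (putStrLn "Hello again")   -- liftIO climbs the whole stack at once
  name  <- lift ask                 -- ask lives one layer down...
  name' <- ask                      -- ...but mtl's pass-through instances
                                    -- also make it available directly
  modify (+1)
  liftIO (putStrLn (name ++ " / " ++ name'))

main :: IO ()
main = do
  (_, s) <- runReaderT (runStateT greet 0) "world"
  print s   -- 1: the StateT layer saw one modify
```

Both `lift ask` and the bare `ask` return "world" here, illustrating Q.2: the MTL instances lift class methods through transformers automatically where an instance exists.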
Re: [Haskell-cafe] A syntax proposal: data replacements
On 08/03/06, ihope [EMAIL PROTECTED] wrote: I'd like to propose some more syntactic sugar for Haskell. (A spoonful of syntactic sugar makes the medicine go down...) Put simply, this would provide a smallish bit of pattern matching, and hopefully clarify some things. A simple example should pretty much define the whole thing: fromJust = {Just -> id; Nothing -> error "Nothing"} This function takes a single value. If the constructor used to construct it is Just, then the Just is replaced with id. Likewise, if it's given Nothing, it will return error "Nothing". Another example: consPair = {(,) -> (:)} This takes a pair (x,y) and replaces the constructor (,) with (:), yielding (x:y). Now, a final clarification: foldr cons nil = {(:) -> cons; [] -> nil} This will *not* work: only the top bit of the list will be replaced this way, as data replacements are not recursive. If you're going to do this at all, why not make them recursive? I suppose that what you want is somewhat the same as allowing partial application of a constructor in pattern matching, so perhaps there is a syntax which would reflect that. However, fromJust and consPair are not awkward functions to write. It might be nice to have some mechanism for generalised catamorphisms on algebraic datatypes though. What you're describing is easy to get on any particular type you want. (Not the exact syntax, but the effect.) For example, fromJust = maybe (error "Nothing") id consPair = uncurry (:) using the catamorphisms for Maybe and (,) respectively. It would be nice if there were an automatic way to construct functions like 'maybe' and 'uncurry' and 'foldr' from the structure of the types alone. You can do that, and many more things, in Generic Haskell. There's also some provision for generics in GHC, but I'm not sure how well maintained it is. 
The docs say that it is currently broken in 5.02 -- I seem to remember trying it more recently and having it work on a few small tests, but I didn't do much more than play around, so I don't know if it's horribly flawed or anything. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
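The catamorphism versions from the reply can be written out in full. This sketch uses the standard eliminators maybe, uncurry, and foldr; the primed names are made up to avoid clashing with the Prelude:

```haskell
-- fromJust via the Maybe catamorphism: one result per constructor
fromJust' :: Maybe a -> a
fromJust' = maybe (error "Nothing") id

-- consPair via the pair catamorphism (uncurry)
consPair :: (a, [a]) -> [a]
consPair = uncurry (:)

main :: IO ()
main = do
  print (fromJust' (Just 3))       -- 3
  print (consPair (1, [2, 3]))     -- [1,2,3]
  -- foldr is the list catamorphism; replacing (:) and [] by themselves
  -- is the identity, matching the proposal's {(:) -> cons; [] -> nil}
  print (foldr (:) [] [1, 2, 3])   -- [1,2,3]
```

Unlike the proposed data-replacement sugar, foldr is recursive: it replaces every (:) and the final [], not just the outermost constructor.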
Re: [Haskell-cafe] Hacking Hawk
You may wish to read http://www.haskell.org/haskellwiki/Arrays which is quite a good tutorial on how the various array types and classes work. - Cale On 01/03/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote: Hawk hasn't been updated since GHC v4.04. It depends on functions that are present in my current (6.04) GHC distribution, but they are not documented. Those functions and constructors are (just an error log from ghc): --- AQ.lhs:17:16: Not in scope: type constructor or class `STArray' AQ.lhs:82:14: Not in scope: `newSTArray' AQ.lhs:92:21: Not in scope: `writeSTArray' AQ.lhs:258:27: Not in scope: `readSTArray' AQ.lhs:262:27: Not in scope: `writeSTArray' --- They are in the GHC.Arr module now. I found that by googling for their names. They also changed their types so that they are not compatible with Control.Monad.ST (previous versions were compatible with LazyST). What do they do? Is it possible to rewrite them for the current version of ghc? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
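For reference, the old newSTArray/readSTArray/writeSTArray functions correspond to the MArray class operations newArray/readArray/writeArray in the array library that replaced them. This is a sketch of the modern spelling, not a verified port of the Hawk code:

```haskell
import Data.Array (Array, elems)
import Data.Array.ST (newArray, readArray, writeArray, runSTArray)

-- build a mutable STArray, mutate it, and freeze it to an immutable Array
demo :: Array Int Int
demo = runSTArray $ do
  arr <- newArray (0, 4) 0   -- was: newSTArray
  writeArray arr 2 99        -- was: writeSTArray
  x <- readArray arr 2       -- was: readSTArray
  writeArray arr 3 (x + 1)
  return arr

main :: IO ()
main = print (elems demo)    -- [0,0,99,100,0]
```

The STArray operations now live in the strict Control.Monad.ST, which is why code written against the old LazyST-compatible versions needs adjusting.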
Re: [Haskell-cafe] Layout rule (was Re: PrefixMap: code review request)
On 28/02/06, Brian Hulley [EMAIL PROTECTED] wrote: Why? Surely typing one tab is better than having to hit the spacebar 4 (or 8) times? I'm really puzzled here. I've been using tabs to indent my C++ code for at least 10 years and don't see the problem. The only problem would be if someone mixed tabs with spaces. Since it has to be either tabs only or spaces only I'd choose tabs only to save keystrokes. I suppose though it is always going to be a matter of personal taste... It's easy to configure most editors (vim and emacs included of course) to treat multiple spaces as if they were tabs, but to only save spaces into your file. This is what I do, as it ensures that the way that the code looks to me in my editor is exactly what it looks like to the compiler. Quite often, if it looks better, I will align things past a tab stop with a few extra spaces (which only has to be done once, if your editor will start the next line at the same indentation as the previous). - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Re: haskell programming guidelines
On 01/03/06, Christian Maeder [EMAIL PROTECTED] wrote: In a more realistic example, the current dollars help to improve readability, I think, and that is my argument why $ should be right-associative: map (+ 1) $ filter (/= 0) $ Set.toList l An additional $ before the final argument ( $ l) looks stupid to me. I also find additional parentheses instead of the dollars more confusing, because of the other parts in parentheses. If you don't like map (+ 1) . filter (/= 0) . Set.toList $ l then map (+ 1) . filter (/= 0) $ Set.toList l works just as well. In this case it's also a fairly natural way to break up the thought process. Your main computation is solely a composition of list functions, and the conversion is part of how you get the input to it. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
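The pipeline from this exchange compiles as written once the composition/application split is made. This is just the thread's example made self-contained (Data.Set from the containers library):

```haskell
import qualified Data.Set as Set

-- composition of list functions, with a single $ applying it to the input
process :: Set.Set Int -> [Int]
process = map (+ 1) . filter (/= 0) . Set.toList

main :: IO ()
main = print (process (Set.fromList [0, 1, 2]))  -- [2,3]
```

The point of the split: the (.) chain is the reusable computation, and the one $ marks where the concrete input enters.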
Re: [Haskell-cafe] haskell programming guidelines
On 28/02/06, John Meacham [EMAIL PROTECTED] wrote: On Tue, Feb 28, 2006 at 01:09:03AM -0500, Cale Gibbard wrote: Well, the benefit of the Identity monad is so that the user of a routine can choose to recover gracefully by using a different monad, you only use the Identity monad when you are making a choice to bottom out on errors. using 'error' directly is not an option in said cases because it would take away the ability of the user of a routine to catch errors properly. error should only be used for reporting bugs that should never happen, not user visible failure. I'd argue that it would be better for the user to simply catch the value returned which indicates error explicitly, and throw the error themselves. This indicates that they have put thought into the fact that the function may fail. so does using runIdentity, that is the point of it. You are saying I want failure to bottom out, just like using it as a 'Maybe' means you only care about whether it has a result, using it as an 'Either' means you want the result string, or using it as a WriterT Foo IO means you want to possibly collect some results and have fail throw an IO exception. I consider it bad style to spend code on cases you never expect to happen; if it takes too much work to write code that fails properly on bugs, people aren't going to (and definitely should not have to) do the extra work, they will just write code that fails poorly. Monadic failure is absolutely great for writing robust, concise code. The writer of a library shouldn't decide how (non-buggy) failure should be handled, the user of it should. Right, which is why minimal types for expressing the failure should be used, and the user should convert from those types to whatever larger environment they have in mind. If your function is simply partial, use Maybe; if you want to report error strings, use Either String. These types easily lift into any monad which supports similar functionality. 
It also gives the users of your library more information about the exact way in which your functions may fail, just by looking at the type signatures, and gets them thinking about handling that failure. An arbitrary monad m doesn't indicate anything about the failure modes present. ack! The user of a library is who should get to choose how to deal with the error case, not the library writer. I'd hate to give up such very common idioms as -- collect error messages from all failing parsers [ err | Left err <- map parse xs ] I don't see how you lose this one at all. -- look up a string transformed by a map in another map, failing if it -- is not in said other map. runIdentity $ Map.lookup (concatMap (`Map.lookup` map) xs) smap Suppose Map.lookup returns something in the Maybe monad. let lookup k m = fromJust $ Map.lookup k m in lookup (map (`lookup` m) xs) sm Not so hard. How about if Map.lookup is prepared to give us a string via the Either String monad and we want to throw an error: let lookup k m = either error id $ Map.lookup k m in lookup (map (`lookup` m) xs) sm If we had a bigger monadic context, it would be just as easy to lift the error up into that. let lookup k m = either (throwError . strMsg) return $ Map.lookup k m in do vs <- mapM (`lookup` m) xs; lookup vs sm Or finally, if Map.lookup uses the MonadError class, like it probably should: do vs <- mapM (`Map.lookup` m) xs; Map.lookup vs sm But note that this is *not* the Identity monad we're working in here. It's some MonadError, and as far as I'm concerned, that's quite different. Also, if Map.lookup was equipped to give us symbolic information about the error, we could extend this to that. With fail, all we get is a string. We'd know what's actually available from Map.lookup before we write any of this. It's important to note here that either (throwError . strMsg) return is a useful lifter in its own right, and should probably be extracted and put in the library. 
but the real power is when you combine monadic failure with combinators and monad transformers -- imagine some complicated function f x xs = runWriterT $ mapM (\x -> foofunc x >>= tell) xs the great thing about this is it is transparent to failure! so you can build arbitrarily complicated transformers while still letting the user of 'f' decide what to do with failure. this is a great feature, if foofunc returned a data type, the writer of 'f' would be forced to deal with failure, and might (likely will) do something silly like call 'error'. I'm not sure I understand your point here. Why would the writer of f be any more forced to deal with failure if foofunc returned a specific type here? In fact, it must be at least typed in WriterT, so I'm not sure what you mean. The code would be identical regardless of whether the transformed monad was fixed or not, and the writer of f doesn't have to do anything. What I'm advocating is not the use of non-monadic case-style failure handling. I'm advocating the use of specific classes and types which
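The "useful lifter" mentioned in this exchange can be written down directly. This sketch uses the modern mtl spelling (Control.Monad.Except); the thread's strMsg belonged to the older Control.Monad.Error class, and the names liftEither' and lookupE are made up for the example:

```haskell
import Control.Monad.Except (MonadError, runExcept, throwError)
import qualified Data.Map as Map

-- the lifter: from a concrete Either String into any MonadError String m
liftEither' :: MonadError String m => Either String a -> m a
liftEither' = either throwError return

-- a minimal-type lookup that reports failure via Either String
lookupE :: Ord k => k -> Map.Map k v -> Either String v
lookupE k = maybe (Left "key not found") Right . Map.lookup k

main :: IO ()
main = do
  let m = Map.fromList [("a", 1 :: Int)]
  print (runExcept (liftEither' (lookupE "a" m)))  -- Right 1
  print (runExcept (liftEither' (lookupE "b" m)))  -- Left "key not found"
```

This is the pattern Cale argues for: the library commits only to the minimal failure type (Either String), and the caller lifts it into whatever larger MonadError context they are working in.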
Re: [Haskell-cafe] haskell programming guidelines
On 24/02/06, John Meacham [EMAIL PROTECTED] wrote: On Fri, Feb 24, 2006 at 12:39:27PM -0500, Cale Gibbard wrote: I look at the above as generating a proof obligation for me as the programmer that the lookup will never fail, or at least the ability to convince myself. :) If you want to handle errors, you should actually handle them, not let your users get Irrefutable pattern failed messages. Also, if someone else later comes along and wants to catch that error, they have to either do it in IO, which can be fiddly if the error occurs deep in the evaluation of some structure, or they refactor your code so that it returns the error explicitly. Sure, irrefutable pattern matches are useful, but they shouldn't be used if you expect they'll ever fail. Ah, perhaps I wasn't clear. I don't ever expect these to fail. The reason I prefer irrefutable pattern matches to handwritten 'error' messages (at first) is so that many months later when I introduce a subtle heisenbug I don't get an "error: This shouldn't happen" or worse "error: Prelude.undefined", but rather a nice error pointing right to the line number. anything I ever expect to fail for any reason other than a bug I put in a failing Monad with a suitably user-digestable error message. So, I was comparing them to handwritten 'error' messages for announcing programming bugs, not handwritten 'error' messages for users to see (which really should be using 'fail' in a monad anyway). John Well, this is an issue. Perhaps a version of error which makes the line/column number available to its parameter would help... something along the lines of type SourcePos = (Integer, Integer) -- possibly a data/newtype with a nicer Show instance errorPos :: (SourcePos -> String) -> a This would give all the benefits normally acquired from the expansion of the syntax sugar while allowing you to additionally add any extra messages you'd like. 
Further, you'd not be required to work in, say the identity monad, in order to get line number messages for failures (though in GHC at least, irrefutable pattern match failures in lambdas and let also get line numbered). I'm actually really against the inclusion of fail in the Monad class, so finding a reasonable replacement for any constructive uses it might have had is important to me. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
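The errorPos idea proposed in this thread can nowadays be approximated with GHC's (much later) HasCallStack machinery, which threads source positions implicitly. This is a sketch of that correspondence, not what the thread's participants had available at the time:

```haskell
import Control.Exception (ErrorCall (..), evaluate, try)
import GHC.Stack (HasCallStack, callStack, prettyCallStack)

-- like the proposed errorPos: an error that carries its call site,
-- plus whatever extra message you'd like
errorPos :: HasCallStack => String -> a
errorPos msg = error (msg ++ "\n" ++ prettyCallStack callStack)

main :: IO ()
main = do
  r <- try (evaluate (errorPos "lookup failed" :: Int))
         :: IO (Either ErrorCall Int)
  case r of
    Left (ErrorCall m) -> putStrLn m  -- message plus file:line:column info
    Right _            -> return ()
```

Any constraint-propagating caller of errorPos gets accurate positions without CPP magic or SRCLOC_ANNOTATE pragmas.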
Re: [Haskell-cafe] PrefixMap: code review request
On 27/02/06, David F. Place [EMAIL PROTECTED] wrote: On Feb 27, 2006, at 5:54 PM, Brian Hulley wrote: there is a parse error (using ghc) at the line beginning with result'. This binding doesn't line up with anything. Also the second 'where' is dangerously close to the column started by the 'f' after the first 'where' (may not be noticeable in this email due to whatever font it is being displayed in but it's only indented by one space) which makes it a bit tricky to read. Whoops, that's noise in the transmission. In my original file and my original email, it is indented correctly. As for other indentation issues, I always use whatever emacs suggests. Is that not a good strategy? I find that I often really dislike emacs' choices for indenting -- so much so that I had to turn off smart indenting altogether to avoid holding down the spacebar all the time (of course it remaps tab). The simple indenter gives all the possible sane selections for lining things up and lets you indent further if you want. I'd normally go with the first of Brian's suggestions for laying out an if expression (condition on one line, then and else aligned at a deeper depth right after it), unless the condition was really long, in which case I might switch to the second to get a little more space on that first line. I don't personally worry about tab width, as long as things line up in any given function definition. I usually use 4 spaces when not thinking about it, but sometimes 3, or even 2, or far more, if it makes things look better. Most editors are quite good at putting the cursor at the same indent level on the following line, and this is all the help you tend to need in this regard when editing a file. (Random musing: It would be neat to have an editor which dropped subtle vertical bars under the start of blocks, to make it easier to visually apply the layout rule, and see when you haven't indented far enough, and the block is broken.) 
Always tell your editor to leave tabs out of your files. There are very few Haskell programmers who actually leave tabs in their source, and mixing spaces and tabs can be deadly. You can set this in emacs by adding (setq-default indent-tabs-mode nil) to your .emacs if you haven't already done so. If you also use vim, set expandtab is the line you want. :) - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] haskell programming guidelines
On 27/02/06, John Meacham [EMAIL PROTECTED] wrote: On Mon, Feb 27, 2006 at 10:57:17PM -0500, Cale Gibbard wrote: Well, this is an issue. Perhaps a version of error which makes the line/column number available to its parameter would help... something along the lines of type SourcePos = (Integer, Integer) -- possibly a data/newtype with a nicer Show instance errorPos :: (SourcePos -> String) -> a Yes, this is what jhc's SRCLOC_ANNOTATE addresses, more or less. This would give all the benefits normally acquired from the expansion of the syntax sugar while allowing you to additionally add any extra messages you'd like. Further, you'd not be required to work in, say the identity monad, in order to get line number messages for failures (though in GHC at least, irrefutable pattern match failures in lambdas and let also get line numbered). Well, the benefit of the Identity monad is so that the user of a routine can choose to recover gracefully by using a different monad, you only use the Identity monad when you are making a choice to bottom out on errors. using 'error' directly is not an option in said cases because it would take away the ability of the user of a routine to catch errors properly. error should only be used for reporting bugs that should never happen, not user visible failure. I'd argue that it would be better for the user to simply catch the value returned which indicates error explicitly, and throw the error themselves. This indicates that they have put thought into the fact that the function may fail. The writer of a library shouldn't decide how (non-buggy) failure should be handled, the user of it should. Right, which is why minimal types for expressing the failure should be used, and the user should convert from those types to whatever larger environment they have in mind. If your function is simply partial, use Maybe; if you want to report error strings, use Either String. These types easily lift into any monad which supports similar functionality. 
It also gives the users of your library more information about the exact way in which your functions may fail, just by looking at the type signatures, and gets them thinking about handling that failure. An arbitrary monad m doesn't indicate anything about the failure modes present. I'm actually really against the inclusion of fail in the Monad class, so finding a reasonable replacement for any constructive uses it might have had is important to me. I know you keep saying this. We start with the exact same premises and goals, yet somehow come to the exact opposite conclusion. I have not quite figured out why. However, a quick survey shows that _every single_ monad defined in the standard and fptools libraries has an interesting non-error 'fail' method other than Identity, whose sole purpose is to turn 'fail's into errors. Separating out a MonadError with 'fail' seems rather odd as every monad will be an instance of it! (including Identity, since turning fails into errors is its main purpose) (the monads like 'Reader' and 'Writer' are actually just shorthand for ReaderT a Identity, the inner monad determines the failure mode) John Well, that means that Reader, Writer and State, and any monad based upon them or their transformers, does not have a meaningful fail. IO also does not have an interesting fail. It also means that all custom monads based on state transformers, say, don't have interesting fails. This is a very large chunk of the monads which people use in everyday code! The List monad and Maybe monad have nice fails, and that's why they should be in MonadZero. I disagree that Identity, Reader, Writer, or State should be an instance of MonadError or MonadZero. They should simply not be used for that purpose. I'd like a monad hierarchy where if there is an instance of a class for a monad, then none of the methods of that class are identically bottom. 
It seems disingenuous to me to say that some type constructor implements certain functionality, and then implement it in a way which crashes the program. If you need failure in your monad, add it explicitly via a transformer, and if you use failure, you should express that via a class. Types and classes should be meaningful and informative about this sort of thing. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
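The "nice fails" of Maybe and the list monad that Cale refers to are the ones triggered by pattern-match failure in do-notation and comprehensions, where a failed match becomes Nothing or an empty branch rather than a crash. A small illustration (function names made up; modern GHC routes this through MonadFail, a descendant of the MonadZero split argued for here):

```haskell
-- failing matches in a comprehension are dropped, not crashes
firstElems :: [[Int]] -> [Int]
firstElems xss = [x | (x:_) <- xss]

-- a failed match in Maybe's do-notation gives Nothing instead of bottom
unwrap :: Maybe (Maybe Int) -> Maybe Int
unwrap mmx = do
  Just x <- mmx
  return x

main :: IO ()
main = do
  print (firstElems [[1, 2], [], [3]])  -- [1,3]
  print (unwrap (Just Nothing))         -- Nothing
  print (unwrap (Just (Just 5)))        -- Just 5
```

Contrast this with Identity, Reader, Writer, or State, where the same pattern failure can only bottom out, which is exactly why Cale argues they should not claim the failure interface at all.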
Re: [Haskell-cafe] haskell programming guidelines
On 20/02/06, John Meacham [EMAIL PROTECTED] wrote: There is a more straightforward way to get localized error messages rather than using 'maybe' and hand-writing an appropriate error, and that is to rely on irrefutable bindings. f x = ... y ... where Just y = Map.lookup x theMap now if the lookup fails you automatically get an error message pointing to the exact line number of the failure. or if the failure message of the routine is more important than the source location you can do f x = ... y ... where Identity y = Map.lookup x theMap it is annoying that you have to make a choice between these two possibilities, but this can be mitigated with CPP magic or the SRCLOC_ANNOTATE pragma. John I look at the above as generating a proof obligation for me as the programmer that the lookup will never fail, or at least the ability to convince myself. :) If you want to handle errors, you should actually handle them, not let your users get Irrefutable pattern failed messages. Also, if someone else later comes along and wants to catch that error, they have to either do it in IO, which can be fiddly if the error occurs deep in the evaluation of some structure, or they refactor your code so that it returns the error explicitly. Sure, irrefutable pattern matches are useful, but they shouldn't be used if you expect they'll ever fail. - Cale ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
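The two styles in this exchange side by side, as a runnable sketch (theMap and the function names are invented for the example):

```haskell
import qualified Data.Map as Map

theMap :: Map.Map String Int
theMap = Map.fromList [("x", 1)]

-- John's style: an irrefutable binding; a missing key produces an
-- "Irrefutable pattern failed" error with a line number
fLazy :: String -> Int
fLazy k = y * 10 where Just y = Map.lookup k theMap

-- Cale's style: the failure is returned explicitly and carries a
-- real message the caller can handle without IO-level catching
fExplicit :: String -> Either String Int
fExplicit k = case Map.lookup k theMap of
  Just y  -> Right (y * 10)
  Nothing -> Left ("no entry for " ++ show k)

main :: IO ()
main = do
  print (fLazy "x")      -- 10
  print (fExplicit "x")  -- Right 10
  print (fExplicit "y")  -- Left, with the key in the message
```

The explicit version costs one case expression, but turns the "proof obligation" Cale describes into a type the compiler and the caller can both see.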