Re: [Haskell-cafe] [ANN] lvish 1.0 -- successor to monad-par
Ryan Newton rrnew...@gmail.com writes: Hi all, I'm pleased to announce the release of our new parallel-programming library, LVish: hackage.haskell.org/package/lvish It provides a Par monad similar to the monad-par package, but generalizes the model to include data-structures other than single-assignment variables (IVars). For example, it has lock-free concurrent data structures for Sets and Maps, which are constrained to only grow monotonically during a given runPar (to retain determinism). This is based on work described in our upcoming POPL 2014 paper: Do you have any idea why the Haddocks don't yet exist? If I recall correctly, under Hackage 1 the module names wouldn't be made links until Haddock generation had completed. Currently the lvish modules' links point to non-existent URLs. Also, is there a publicly accessible repository where further development will take place? Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Proposal: Pragma EXPORT
Ivan Lazar Miljenovic ivan.miljeno...@gmail.com writes: On 17 September 2013 09:35, Evan Laforge qdun...@gmail.com wrote: snip None of this is a big deal, but I'm curious about other's opinions on it. Are there strengths to the separate export list that I'm missing? I do like the actual summary aspect as you've noted, as I can at times be looking through the actual code rather than haddock documentation when exploring new code (or even trying to remember what I wrote in old code). The summary of functionality that the export list provides is a very nice feature that I often miss in other languages. That being said, it brings up a somewhat related issue that may become increasingly problematic with the rising use of libraries such as lens: exporting members defined by Template Haskell. While we have nice sugar for exporting all accessors of a record (MyRecord(..)), we have no way to do the same for analogous TH-generated members such as lenses. Instead, we require that the user laboriously list each member of the record, taking care not to forget any. One approach would be to simply allow TH to add exports as presented in Ticket #1475 [1]. I can't help but wonder if there's another way, however. One (questionable) option would be to allow globbing patterns in export lists. Another approach might be to introduce some notion of a name list which can appear in the export list. These lists could be built up by either user declarations in the source module or in Template Haskell splices and would serve as a way to group logically related exports. This would allow uses such as (excuse the terrible syntax), module HelloWorld ( namelist MyDataLenses , namelist ArithmeticOps ) where import Control.Lens data MyData = MyData { ... } makeLenses ''MyDataLenses -- makeLenses defines a namelist called MyDataLenses namelist ArithmeticOps (add) add = ... namelist ArithmeticOps (sub) sub = ... That being said, there are a lot of reasons why we wouldn't want to introduce such a mechanism, * we'd give up the comprehensive summary that the export list currently provides * haddock headings already provides a perfectly fine means for grouping logically related exports * it's hard to envision the implementation of such a feature without the introduction of new syntax * there are arguably few uses for such a mechanism beyond exporting TH constructs * you still have the work of solving the issues presented in #1475 Anyways, just a thought. Cheers, - Ben [1] http://ghc.haskell.org/trac/ghc/ticket/1475 pgpoddlH92BBj.pgp Description: PGP signature ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] name lists
Roman Cheplyaka r...@ro-che.info writes: * Ben Gamari bgamari.f...@gmail.com [2013-09-17 10:03:41-0400] Another approach might be to introduce some notion of a name list which can appear in the export list. These lists could be built up by either user declarations in the source module or in Template Haskell splices and would serve as a way to group logically related exports. This would allow uses such as (excuse the terrible syntax), Hi Ben, Isn't this subsumed by ordinary Haskell modules, barring the current compilers' limitation that modules are in 1-to-1 correspondence with files (and thus are somewhat heavy-weight)? E.g. the above could be structured as module MyDataLenses where data MyData = MyData { ... } makeLenses ''MyData module HelloWorld (module MyDataLenses, ...) where ... True. Unfortunately I've not seen much motion towards relaxing this limitation[1]. Cheers, - Ben [1] http://ghc.haskell.org/trac/ghc/ticket/2551 pgpjfzeZUOpNI.pgp Description: PGP signature ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Performance of delete-and-return-last-element
isn't this what zippers are for? b On Aug 30, 2013, at 1:04 PM, Clark Gaebel wrote: I don't think a really smart compiler can make that transformation. It looks like an exponential-time algorithm would be required, but I can't prove that. GHC definitely won't... For this specific example, though, I'd probably do: darle :: [a] -> (a, [a]) darle xs = case reverse xs of [] -> error "darle: empty list" (x:xs) -> (x, reverse xs) - Clark On Fri, Aug 30, 2013 at 2:18 PM, Lucas Paul reilith...@gmail.com wrote: Suppose I need to get an element from a data structure, and also modify the data structure. For example, I might need to get and delete the last element of a list: darle xs = ((last xs), (rmlast xs)) where rmlast [_] = [] rmlast (y:ys) = y:(rmlast ys) There are probably other and better ways to write rmlast, but I want to focus on the fact that darle here, for lack of a better name off the top of my head, appears to traverse the list twice. Once to get the element, and once to remove it to produce a new list. This seems bad. Especially for large data structures, I don't want to be traversing twice to do what ought to be one operation. To fix it, I might be tempted to write something like: darle' [a] = (a, []) darle' (x:xs) = let (a, ys) = darle' xs in (a, (x:ys)) But this version has lost its elegance. It was also kind of harder to come up with, and for more complex data structures (like the binary search tree) the simpler expression is really desirable. Can a really smart compiler transform/optimize the first definition into something that traverses the data structure only once? Can GHC? - Lucas ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
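A sketch of the zipper/finger-tree angle: if the data can live in a Data.Sequence.Seq rather than a plain list, "take the last element and drop it" is a single O(1) step at the right end, so the double traversal never arises. This is an illustration, not code from the thread:

    import qualified Data.Sequence as Seq
    import Data.Sequence (Seq, ViewR (..), viewr)

    -- Delete-and-return-last-element on a Seq: one cheap step, because
    -- finger trees give O(1) access to both ends.
    darleSeq :: Seq a -> Maybe (a, Seq a)
    darleSeq xs = case viewr xs of
      EmptyR      -> Nothing
      rest :> lst -> Just (lst, rest)

    -- e.g. darleSeq (Seq.fromList [1,2,3]) == Just (3, Seq.fromList [1,2])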
Re: [Haskell-cafe] Compiler stops at SpecConstr optimization
On 30/08/2013, at 2:38 AM, Daniel Díaz Casanueva wrote: While hacking on one of my projects, one of my modules stopped compiling for no apparent reason. The compiler just freezes (as if it were in an infinite loop) while trying to compile that particular module. Since I had this problem I have been trying to reduce the problem as much as I could, and I came up with this small piece of code: module Blah (foo) where import Data.Vector (Vector) import qualified Data.Vector as V foo :: (a -> a) -> Vector a -> Vector a foo f = V.fromList . V.foldl (\xs x -> f x : xs) [] Probably an instance of this one: http://ghc.haskell.org/trac/ghc/ticket/5550 Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] enumerators: exception that can't be catched
This is partially guesswork, but the code to catchWsError looks dubious: catchWsError :: WebSockets p a -> (SomeException -> WebSockets p a) -> WebSockets p a catchWsError act c = WebSockets $ do env <- ask let it = peelWebSockets env $ act cit = peelWebSockets env . c lift $ it `E.catchError` cit where peelWebSockets env = flip runReaderT env . unWebSockets Look at `cit`. It runs the recovery function, then hands the underlying Iteratee the existing environment. That's fine if `act` is at fault, but there are Iteratee- and IO-ish things in WebSocketsEnv---if one of `envSink` or `envSendBuilder` is causing the exception, it'll just get re-thrown after `E.catchError`. (I think. That's the guesswork part.) So check how `envSendBuilder` is built up, and see if there's a way it could throw an exception on client disconnect. On Tue, Aug 27, 2013 at 10:28 AM, Yuras Shumovich shumovi...@gmail.com wrote: Hello, I'm debugging an issue in the websockets package, https://github.com/jaspervdj/websockets/issues/42 I'm not familiar with the enumerator package (websockets is based on it), so I'm looking for help. The exception is thrown inside the enumSocket enumerator using throwError ( http://hackage.haskell.org/packages/archive/network-enumerator/0.1.5/doc/html/src/Network-Socket-Enumerator.html#enumSocket), but I can't catch it with catchError. It is propagated to the run function: <interactive>: recv: resource vanished (Connection reset by peer) The question is: how is this possible? Could it be a bug in the enumerator package? Thanks, Yuras ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GHC flags: optghc
That's not a GHC flag; it's a haddock flag. Haddock (which, in case you're not familiar with it, is a program to generate documentation from Haskell source code) uses GHC, and the `optghc` flag lets you pass options to GHC when you invoke Haddock. See [the Haddock docs of the 6.12 era][1], on page 3. It's also entirely possible that some program besides Haddock uses a flag of the same name (for the same purpose, one would hope). [1]: http://www.haskell.org/ghc/docs/6.12.3/haddock.pdf 2013/8/23 jabolo...@google.com Hi, I am using GHC version 6.12.1. What is optghc ? I can't find that information anywhere... Thanks, Jose -- Jose Antonio Lopes Ganeti Engineering Google Germany GmbH Dienerstr. 12, 80331, München Registergericht und -nummer: Hamburg, HRB 86891 Sitz der Gesellschaft: Hamburg Geschäftsführer: Graham Law, Christine Elizabeth Flores Steuernummer: 48/725/00206 Umsatzsteueridentifikationsnummer: DE813741370 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Ideas on a fast and tidy CSV library
Justin Paston-Cooper paston.coo...@gmail.com writes: Dear All, Recently I have been doing a lot of CSV processing. I initially tried to use the Data.Csv (cassava) library provided on Hackage, but I found this to still be too slow for my needs. In the meantime I have reverted to hacking something together in C, but I have been left wondering whether a tidy solution might be possible to implement in Haskell. Have you tried profiling your cassava implementation? In my experience I've found it's quite quick. If you have an example of a slow path I'm sure Johan (cc'd) would like to know about it. I would like to build a library that satisfies the following: 1) Run a function f :: a_1 -> ... -> a_n -> m (Maybe (b_1, ..., b_n)), with m some monad and the a_i and b_i being inputs and outputs. 2) Be able to specify a maximum record string length and output record string length, so that the string buffers used for reading and outputting lines can be reused, preventing the need for allocating new strings for each record. 3) Allocate, only once, the memory where the parsed input values and output values are put. Ultimately this could be rather tricky to enforce. Haskell code generally does a lot of allocation and the RTS is well optimized to handle this. I've often found that trying to shoehorn a non-idiomatic optimal imperative approach into Haskell produces worse performance than the more readable, idiomatic approach. I understand this leaves many of your questions unanswered, but I'd give the idiomatic approach a bit more time before trying to coerce C into Haskell. Profile, see where the hotspots are and optimize appropriately. If the profile has you flummoxed, the lists and #haskell are always willing to help given the time. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
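For reference, a minimal cassava decoding loop of the kind worth profiling before dropping down to C. The file name and record type are made up for the example, and decode/NoHeader is the interface of recent cassava releases:

    {-# LANGUAGE ScopedTypeVariables #-}
    import qualified Data.ByteString.Lazy as BL
    import Data.Csv (HasHeader (NoHeader), decode)
    import qualified Data.Vector as V

    -- Sum the third column of a headerless CSV of (Int, Int, Double) rows.
    main :: IO ()
    main = do
      csv <- BL.readFile "input.csv"   -- hypothetical input file
      case decode NoHeader csv of
        Left err -> putStrLn err
        Right (rows :: V.Vector (Int, Int, Double)) ->
          print (V.sum (V.map (\(_, _, x) -> x) rows))

Building with ghc -O2 -prof -fprof-auto and running with +RTS -p then gives the cost-centre profile to inspect before concluding that cassava itself is the bottleneck.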
Re: [Haskell-cafe] homotopy type theory for amateurs
hello cafe -- i have created the mailing list hott-amate...@googlegroups.com https://groups.google.com/d/forum/hott-amateurs (perhaps forever consigning myself to spam folders everywhere.) if you are interested in joining this reading group, you can do so there. nothing has been decided yet on how it is to be run. best, b On Jun 25, 2013, at 1:03 PM, Ben wrote: hello cafe -- by now i'm sure you have heard that the homotopy type theory folks have just written up a free introductory book on their project. http://homotopytypetheory.org/2013/06/20/the-hott-book/ gabriel gonzalez and i are starting up a small reading group for the book. the level of study will be amateur, though i have high hopes for rigor and thoroughness. personally, i know hardly any type theory or logic, and the last time i thought about homotopy theory in any seriousness was years ago. thankfully the book looks very accessible. being time-constrained we were going to do it mostly over email, maybe starting a google group or other mailing list, but we might meet as well, say once a month in san francisco (where we both are.) if you're interested in joining us, send me an email (midfield at gmail) and i'll try to keep you informed of any developments. best, ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] homotopy type theory for amateurs
hello cafe -- by now i'm sure you have heard that the homotopy type theory folks have just written up a free introductory book on their project. http://homotopytypetheory.org/2013/06/20/the-hott-book/ gabriel gonzalez and i are starting up a small reading group for the book. the level of study will be amateur, though i have high hopes for rigor and thoroughness. personally, i know hardly any type theory or logic, and the last time i thought about homotopy theory in any seriousness was years ago. thankfully the book looks very accessible. being time-constrained we were going to do it mostly over email, maybe starting a google group or other mailing list, but we might meet as well, say once a month in san francisco (where we both are.) if you're interested in joining us, send me an email (midfield at gmail) and i'll try to keep you informed of any developments. best, ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Array, Vector, Bytestring
Artyom Kazak y...@artyom.me writes: silvio silvio.fris...@gmail.com wrote on Mon, 03 Jun 2013 22:16:08 +0300: Hi everyone, Every time I want to use an array in Haskell, I find myself having to look up in the doc how they are used, which exactly are the modules I have to import ... and I am a bit tired of staring at type signatures for 10 minutes to figure out how these arrays work every time I use them (It's even worse when you have to write the signatures). I wonder how other people perceive this issue and what possible solutions could be. Recently I’ve started to perceive this issue as “hooray, we have lenses now, a generic interface for all the different messy stuff we have”. But yes, the inability to have One Common API for All Data Structures is bothering me as well. Why do we need so many different implementations of the same thing? In the ghc libraries alone we have a vector, array and bytestring package all of which do the same thing, as demonstrated for instance by the vector-bytestring package. To make matters worse, the Haskell 2010 standard includes a watered-down version of array. Indeed. What we need is `text` for strings (and stop using `bytestring`) and a reworked `vector` for arrays (with added code from `StorableVector` — basically a lazy ByteString-like chunked array). To be perfectly clear, ByteString and Text target very different use cases and are hardly interchangeable. While ByteString is, as the name suggests, a string of bytes, Text is a string of characters in a Unicode encoding. When you are talking about unstructured binary data, you should most certainly be using ByteString. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
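A small illustration of why the two are not interchangeable (not from the thread, just the standard round trip via Data.Text.Encoding): the character count and the UTF-8 byte count of the same data diverge as soon as non-ASCII characters appear.

    {-# LANGUAGE OverloadedStrings #-}
    import qualified Data.ByteString as BS
    import qualified Data.Text as T
    import Data.Text.Encoding (encodeUtf8)

    main :: IO ()
    main = do
      let t  = "Grüße" :: T.Text       -- 5 characters
          bs = encodeUtf8 t            -- 7 bytes in UTF-8
      print (T.length t, BS.length bs) -- prints (5,7)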
Re: [Haskell-cafe] mapFst and mapSnd
On Tue, May 28, 2013 at 1:54 AM, Dominique Devriese dominique.devri...@cs.kuleuven.be wrote: Hi all, I often find myself needing the following definitions: mapPair :: (a -> b) -> (c -> d) -> (a,c) -> (b,d) mapPair f g (x,y) = (f x, g y) mapFst :: (a -> b) -> (a,c) -> (b,c) mapFst f = mapPair f id mapSnd :: (b -> c) -> (a,b) -> (a,c) mapSnd = mapPair id But they seem missing from the prelude, and Hoogle or Hayoo only turn up versions of them in packages like scion or fgl. Has anyone else felt the need for these functions? Am I missing some generalisation of them perhaps? One generalization of them is to lenses. For example `lens` has both, _1, _2, such that mapPair = over both, mapFst = over _1, etc., but you can also get fst = view _1, set _2 = \y' (x,_) -> (x,y'), and so on. (Since both refers to two elements, you end up with view both = \(x,y) -> mappend x y.) The types you end up with are simple generalizations of mapFoo, with just an extra Functor or Applicative (think mapMFoo): both :: Applicative f => (a -> f b) -> (a,a) -> f (b,b) both f (x,y) = (,) <$> f x <*> f y _2 :: Functor f => (a -> f b) -> (e,a) -> f (e,b) _2 f (x,y) = (,) x <$> f y With an appropriate choice of f you can get many useful functions. Shachaf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] mapFst and mapSnd
On Thu, May 30, 2013 at 7:12 PM, Shachaf Ben-Kiki shac...@gmail.com wrote: One generalization of them is to lenses. For example `lens` has both, _1, _2, such that mapPair = over both, mapFst = over _1, etc., but you can also get fst = view _1, set _2 = \y' (x,_) -> (x,y'), and so on. (Since both refers to two elements, you end up with view both = \(x,y) -> mappend x y.) The types you end up with are simple generalizations of mapFoo, with just an extra Functor or Applicative (think mapMFoo): both :: Applicative f => (a -> f b) -> (a,a) -> f (b,b) both f (x,y) = (,) <$> f x <*> f y _2 :: Functor f => (a -> f b) -> (e,a) -> f (e,b) _2 f (x,y) = (,) x <$> f y With an appropriate choice of f you can get many useful functions. I spoke too quickly -- your mapPair is something different. Indeed bimap (or (***), if you prefer base) is the place to find it -- lenses don't really fit here. My both is for mapping one function over both elements. Shachaf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
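A few concrete evaluations of the combinators discussed above, assuming the lens package is in scope; the results follow directly from the definitions Shachaf gives.

    import Control.Lens

    examples :: [Bool]
    examples =
      [ over both (+ 1) (1 :: Int, 2 :: Int) == (2, 3)  -- one function, both slots
      , over _1 show (1 :: Int, 'a') == ("1", 'a')      -- mapFst
      , set _2 True (1 :: Int, 'a') == (1, True)        -- replace the second slot
      , view _1 ('x', 'y') == 'x'                       -- plain fst
      ]
      -- every element of examples evaluates to True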
Re: [Haskell-cafe] [haskell.org Google Summer of Code 2013] Approved Projects
On 29/05/2013, at 1:11 AM, Edward Kmett wrote: This unfortunately means, that we can't really show the unaccepted proposals with information about how to avoid getting your proposal rejected. You can if you rewrite the key points of proposal to retain the overall message, but remove identifying information. I think it would be helpful to write up some of the general reasons for projects being rejected. I tried to do this for Haskell experience reports, on the Haskell Symposium experience report advice page. http://www.haskell.org/haskellwiki/HaskellSymposium/ExperienceReports I'd imagine you could write up some common proposal / rejection / advice tuples like: Proposal: I want to write a MMORPG in Haskell, because this would be a good demonstration for Haskell in a large real world project. We can use this as a platform to develop the networking library infrastructure. Rejection: This project is much too big, and the production of a MMORPG wouldn't benefit the community as a whole. Advice: If you know of specific problems in the networking library infrastructure, then focus on those, using specific examples of where people have tried to do something and failed. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Teaching FP with Haskell
Helium seems interesting, but the code is a little stale, no? The last updates seem to be from 2008-2009. I couldn't get it to build with ghc 7.6.3, not that I tried too terribly hard. On Tue, May 21, 2013 at 6:07 AM, Andrew Butterfield andrew.butterfi...@scss.tcd.ie wrote: Rustom, you should look at Helium - http://www.cs.uu.nl/wiki/bin/view/Helium/WebHome Andrew. On 21 May 2013, at 10:55, Rustom Mody wrote: We are offering a MOOC on haskell : https://moocfellowship.org/submissions/the-dance-of-functional-programming-languaging-with-haskell-and-python Full Announcement on beginners list : http://www.haskell.org/pipermail/beginners/2013-May/012013.html One question that I have been grappling with in this regard: How to run ghc in lightweight/beginner mode? 2 examples of what I mean: 1. gofer used to come with an alternative standard prelude -- 'simple.pre' Using this, gofer would show many of the type-class based errors as simple (non-type-class based) errors. This was very useful for us teachers to help noobs start off without intimidating them. 2. Racket comes with a couple of levels. The easier numbers were not completely consistent with scheme semantics, but was gentle to beginners Any thoughts/inputs on this will be welcomed Rusi ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe Andrew Butterfield Tel: +353-1-896-2517 Fax: +353-1-677-2204 Lero@TCD, Head of Foundations Methods Research Group Director of Teaching and Learning - Undergraduate, School of Computer Science and Statistics, Room G.39, O'Reilly Institute, Trinity College, University of Dublin http://www.scss.tcd.ie/Andrew.Butterfield/ ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Backward compatibility
You might want to check out FPComplete (https://www.fpcomplete.com/page/about-us), if you haven't already. They're far more focused on making it easy for organizations to adopt Haskell than the community can be. As they say: Where the open-source process is not sufficient to meet commercial adoption needs, we provide the missing pieces. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Backward compatibility
What pray tell are those missing pieces? Aren't they mostly building a browser based ide plus doing training courses ? Sure, and I believe they plan to have that browser-based IDE talk to a virtual server, with a compiler and set of libraries they maintain. That'd solve Adrian's problems, no? So long as he can bring himself to use Yesod over WASH? Perhaps more importantly, they're well-spoken and business-savvy, and they can persuasively promise that they'll make a risk-averse corporation's (overblown) worries go away. If he's in a management battle, he ought to know where to hire some mercenaries. On Sat, May 4, 2013 at 2:03 PM, Carter Schonwald carter.schonw...@gmail.com wrote: What pray tell are those missing pieces? Aren't they mostly building a browser based ide plus doing training courses ? On May 4, 2013 1:42 PM, Ben Doyle benjamin.peter.do...@gmail.com wrote: You might want to check out FPComplete (https://www.fpcomplete.com/page/about-us), if you haven't already. They're far more focused on making it easy for organizations to adopt Haskell than the community can be. As they say: Where the open-source process is not sufficient to meet commercial adoption needs, we provide the missing pieces. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project
sorry, i was only trying to make a helpful suggestion! just to clarify: i'm not championing asciitext (or any other format) -- i only heard about it recently in a comment on http://www.codinghorror.com/blog/2012/10/the-future-of-markdown.html i checked it out and it sounded cool, so i thought it'd be a helpful pointer to whomever is working on new haddock -- they are of course welcome to ignore it. totally understand that overmuch debate is not helpful (though i'm not sure it's fair to call it bikeshedding, since it is a primary feature of the proposed project!) best, ben On Apr 27, 2013, at 2:02 PM, Bryan O'Sullivan wrote: On Sat, Apr 27, 2013 at 1:47 PM, Ben midfi...@gmail.com wrote: asciidoc has been mentioned a few times in comments, i think it's worth looking at. This is the problem I was afraid of: for every markup syntax under the sun, someone will come along to champion it. The choice of one or N syntaxes is ultimately up to the discretion of the student, guided by their mentor. It is in our collective interest to avoid prolonging a bikeshed discussion on this, as a long inconclusive discussion risks dissuading any sensible student or mentor from wanting to pursue the project in the first place. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Markdown extension for Haddock as a GSoC project
asciidoc has been mentioned a few times in comments, i think it's worth looking at. * mature, over 10 years old (predates markdown i think), not just another markdown clone * human readable, but it has a lot of advanced features including mathematical formulas. * github supports it (they were sufficiently impressed with it to make a ruby implementation called asciidoctor) * several o'reilly books have been written in it, and the git documentation is written in it. roughly, asciidoc is to docbook as markdown is to html. i'm no expert in this area but it seems to be a good alternative. http://asciidoc.org/ http://asciidoctor.org/docs/what-is-asciidoc-why-use-it/ best, ben On Apr 27, 2013, at 11:06 AM, Bryan O'Sullivan wrote: On Sat, Apr 27, 2013 at 2:23 AM, Alistair Bayley alist...@abayley.org wrote: How's about Creole? http://wikicreole.org/ Found it via this: http://www.wilfred.me.uk/blog/2012/07/30/why-markdown-is-not-my-favourite-language/ If you go with Markdown, I vote for one of the Pandoc implementations, probably Pandoc (strict): http://johnmacfarlane.net/babelmark2/ (at least then we're not creating yet another standard...) Probably the best way to deal with this is by sidestepping it: make the support for alternative syntaxes as modular as possible, and choose two to start out with in order to get a reasonable shot at constructing a suitable API. I think it would be a shame to bikeshed on which specific syntaxes to support, when a lot of productive energy could more usefully go into actually getting the work done. Better to say prefer a different markup language? code to this API, then submit a patch! ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On 25/04/2013, at 3:47 AM, Duncan Coutts wrote: It looks like fold and unfold fusion systems have dual limitations: fold-based fusion cannot handle zip style functions, while unfold-based fusion cannot handle unzip style functions. That is fold-based cannot consume multiple inputs, while unfold-based cannot produce multiple outputs. Yes. This is a general property of data-flow programs and not just compilation via Data.Vector style co-inductive stream fusion, or a property of fold/unfold/hylo fusion. Consider these general definitions of streams and costreams. -- A stream must always produce an element. type Stream a = IO a -- A costream must always consume an element. type CoStream a = a -> IO () And operators on them (writing S for Stream and C for CoStream). -- Versions of map. map :: (a -> b) -> S a -> S b (ok) comap :: (a -> b) -> C b -> C a (ok) -- Versions of unzip. unzip :: S (a, b) -> (S a, S b) (bad) counzip :: C a -> C b -> C (a, b) (ok) unzipc :: S (a, b) -> C b -> S a (ok) -- Versions of zip. zip :: S a -> S b -> S (a, b) (ok) cozip :: C (a, b) -> (C a, C b) (bad) zipc :: C (a, b) -> S a -> C b (ok) The operators marked (ok) can be implemented without buffering data, while the combinators marked (bad) may need an arbitrarily sized buffer. Starting with 'unzip', suppose we pull elements from the first component of the result (the (S a)) but not the second component (the (S b)). To provide these 'a' elements, 'unzip' must pull tuples from its source stream (S (a, b)) and buffer the 'b' part until someone pulls from the (S b). Dually, with 'cozip', suppose we push elements into the first component of the result (the (C a)). The implementation must buffer them until someone pushes the corresponding element into the (C b), only then can it push the whole tuple into the source (C (a, b)) costream. The two combinators unzipc and zipc are hybrids: For 'unzipc', if we pull an element from the (S a), then the implementation can pull a whole (a, b) tuple from the source (S (a, b)) and then get rid of the 'b' part by pushing it into the (C b). The fact that it can get rid of the 'b' part means it doesn't need a buffer. Similarly, for 'zipc', if we push a 'b' into the (C b) then the implementation can pull the corresponding 'a' part from the (S a) and then push the whole (a, b) tuple into the C (a, b). The fact that it can get the corresponding 'a' means it doesn't need a buffer. I've got some hand drawn diagrams of this if anyone wants them (mail me), but none of it helps implement 'unzip' for streams or 'cozip' for costreams. I'll be interested to see in more detail the approach that Ben is talking about. As Ben says, intuitively the problem is that when you've got multiple outputs you need to make sure that someone is consuming them and that that consumption is appropriately synchronised so that you don't have to buffer (buffering would almost certainly eliminate the gains from fusion). That might be possible if ultimately the multiple outputs are combined again in some way, so that overall you still have a single consumer, that can be turned into a single lazy or eager loop. At least for high performance applications, I think we've reached the limit of what short-cut fusion approaches can provide. By short cut fusion, I mean crafting a special source program so that the inliner + simplifier + constructor specialisation transform can crunch down the intermediate code into a nice loop.
Geoff Mainland's recent paper extended stream fusion with support for SIMD operations, but I don't think stream fusion can ever be made to fuse programs with unzip/cozip-like operators properly. This is a serious problem for DPH, because the DPH vectoriser naturally produces code that contains these operators. I'm currently working on Repa 4, which will include a GHC plugin that hijacks the intermediate GHC core code and performs the transformation described in Richard Waters's paper Automatic transformation of series expressions into loops. The plugin will apply to stream programs, but not affect the existing fusion mechanism via delayed arrays. I'm using a cut down 'clock calculus' from work on synchronous data-flow languages to guarantee that all outputs from an unzip operation are consumed in lock-step. Programs that don't do this won't be well typed. Forcing synchronicity guarantees that Waters's transform will apply to the program. The Repa plugin will also do proper SIMD vectorisation for stream programs, producing the SIMD primops that Geoff recently added. Along the way it will brutally convert all operations on boxed/lifted numeric data to their unboxed equivalents, because I am sick of adding bang patterns to every single function parameter in Repa programs. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On 26/04/2013, at 2:15 PM, Johan Tibell wrote: Hi Ben, On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote: The Repa plugin will also do proper SIMD vectorisation for stream programs, producing the SIMD primops that Geoff recently added. Along the way it will brutally convert all operations on boxed/lifted numeric data to their unboxed equivalents, because I am sick of adding bang patterns to every single function parameter in Repa programs. How far is this plugin from being usable to implement a {-# LANGUAGE Strict #-} pragma for treating a single module as if Haskell was strict? There is already one that does this, but I haven't used it. http://hackage.haskell.org/package/strict-ghc-plugin It's one of the demo plugins, though you need to mark individual functions rather than the whole module (which would be straightforward to add). The Repa plugin is only supposed to munge functions using the Repa library, rather than the whole module. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On 22/04/2013, at 5:27 PM, Edward Z. Yang wrote: So, if I understand correctly, you're using the online/offline criterion to resolve non-directed cycles in pipelines? (I couldn't tell how the Shivers paper was related.) The online criteria guarantees that the stream operator does not need to buffer an unbounded amount of data (I think). I'm not sure what you mean by resolve non-directed cycles. The Shivers paper describes the same basic approach of splitting the code for a stream operator in to parts that run before the loop/for each element of a loop/after the loop etc. Splitting multiple operators this way and then merging the parts into a single loop provides the concurrency required by the description in John Hughes's thesis. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On 22/04/2013, at 11:07 , Edward Z. Yang ezy...@mit.edu wrote: Hello all, (cc'd stream fusion paper authors) I noticed that the current implementation of stream fusion does not support multiple-return stream combinators, e.g. break :: (a -> Bool) -> [a] -> ([a], [a]). I thought a little bit about how one might go about implementing this, but the problem seems nontrivial. (One possibility is to extend the definition of Step to support multiple return, but the details are a mess!) Nor, as far as I can tell, does the paper give any treatment of the subject. Has anyone thought about this subject in some detail? I've spent the last few months fighting this exact problem. The example you state is one instance of a more general limitation. Stream fusion (and most other short-cut fusion approaches) cannot fuse a producer into multiple consumers. The fusion systems don't support any unzip-like function, where elements from the input stream end up in multiple output streams. For example: unzip :: [(a, b)] -> ([a], [b]) dup :: [a] -> ([a], [a]) The general problem is that if elements of one output stream are demanded before the other, then the stream combinator must buffer elements until they are demanded by both outputs. John Hughes described this problem in his thesis, and gave an informal proof that it cannot be solved without some form of concurrency -- meaning the evaluation of the two consumers must be interleaved. I've got a solution for this problem and it will form the basis of Repa 4, which I'm hoping to finish a paper about for the upcoming Haskell Symposium. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On 22/04/2013, at 12:23 , Edward Z. Yang ezy...@mit.edu wrote: I've got a solution for this problem and it will form the basis of Repa 4, which I'm hoping to finish a paper about for the upcoming Haskell Symposium. Sounds great! You should forward me a preprint when you have something in presentable shape. I suppose before then, I should look at repa-head/repa-stream to figure out what the details are? The basic approach is already described in: Automatic Transformation of Series Expressions into Loops Richard Waters, TOPLAS 1991 The Anatomy of a Loop Olin Shivers, ICFP 2005 The contribution of the HS paper is planning to be: 1) How to extend the approach to the combinators we need for DPH 2) How to package it nicely into a Haskell library. I'm still working on the above... Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] multivariate normal distribution in Haskell?
Bas de Haas w.b.deh...@uu.nl writes: Dear List, I’m implementing a probabilistic model for recognising musical chords in Haskell. This model relies on a multivariate normal distribution. I’ve been searching the internet and mainly hackage for a Haskell library to do this for me, but so far I’ve been unsuccessful. What I’m looking for is a Haskell function that does exactly what the mvnpdf function in matlab does: http://www.mathworks.nl/help/stats/multivariate-normal-distribution.html Does anyone know a library that can help me out? As you are likely aware, the trouble with the multivariate normal is the required inversion of the covariance. If you make assumptions concerning the nature of the covariance (e.g. force it to be diagonal or low dimensional) the problem gets much easier. To treat the general, high dimensional case, you pretty much need a linear algebra library (e.g. HMatrix) to perform the inversion (and determinant for proper normalization). Otherwise, implementing the function given the inverse is quite straightforward. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
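A minimal sketch of the general case Ben describes, assuming hmatrix (Numeric.LinearAlgebra) for the inverse and determinant; the operator names (#>, <.>, size) are those of recent hmatrix releases, the density is computed in log space, and nothing here guards against a singular covariance:

    import Numeric.LinearAlgebra

    -- Log density of a multivariate normal with mean mu and covariance sigma,
    -- evaluated at x:  -1/2 * (k*log(2*pi) + log(det sigma) + (x-mu)' sigma^-1 (x-mu))
    logMvnPdf :: Vector Double -> Matrix Double -> Vector Double -> Double
    logMvnPdf mu sigma x =
        -0.5 * (k * log (2 * pi) + log (det sigma) + quad)
      where
        k    = fromIntegral (size x)
        d    = x - mu
        quad = d <.> (inv sigma #> d)   -- Mahalanobis term

In practice one would factor the covariance once (hmatrix's invlndet, or a Cholesky decomposition) rather than call inv and det separately, both for speed and for numerical stability when the covariance is nearly singular.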
Re: [Haskell-cafe] Type level natural numbers
Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk writes: About two weeks ago we got an email (at ghc-users) mentioning that, compared to 7.6, the 7.7.x snapshot would contain (amongst other things) type level natural numbers. I believe the package used is at [1]. Can someone explain what use such a package is in Haskell? I understand uses in a language such as Agda, where we can provide proofs about a type and then use that to perform computations using the type system (such as guaranteeing that concatenating two vectors together will give a new one with the lengths of the two initial vectors combined), however as far as I can tell this is not the case in Haskell (although I don't want to say "impossible" and have Oleg jump me). It most certainly will be possible to do type level arithmetic. For one use-case, see Linear.V from the linear library [1]. The DataKinds work is already available in 7.6, allowing one to use type level naturals, but the type checker is unable to unify arithmetic operations. Cheers, - Ben [1] http://hackage.haskell.org/packages/archive/linear/1.1.1/doc/html/Linear-V.html
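A small sketch of what the DataKinds plus GHC.TypeLits machinery buys you, in the spirit of the Linear.V example: the length lives in the type and can be recovered at runtime. The KnownNat/natVal interface shown is the one from later GHC releases; the 7.7 snapshot discussed here exposed a slightly different, singleton-based API.

    {-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
    import GHC.TypeLits (Nat, KnownNat, natVal)
    import Data.Proxy (Proxy (..))

    -- A list whose length is tracked at the type level (no checking of the
    -- payload length is done here; Linear.V does this properly).
    newtype V (n :: Nat) a = V [a]
      deriving Show

    -- Recover the type-level length as an ordinary Integer.
    vLength :: forall n a. KnownNat n => V n a -> Integer
    vLength _ = natVal (Proxy :: Proxy n)

    v3 :: V 3 Char
    v3 = V "abc"
    -- vLength v3 == 3

What GHC 7.6 cannot yet do is solve arithmetic constraints such as n + m when, say, appending two such vectors, which is the unification limitation Ben mentions.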
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
Johan Tibell johan.tib...@gmail.com writes: On Thu, Apr 4, 2013 at 9:49 AM, Johan Tibell johan.tib...@gmail.com wrote: I suggest that we implement an alternative haddock syntax that's a superset of Markdown. It's a superset in the sense that we still want to support linkifying Haskell identifiers, etc. Modules that want to use the new syntax (which will probably be incompatible with the current syntax) can set: {-# HADDOCK Markdown #-} Let me briefly argue for why I suggested Markdown instead of the many other markup languages out there. Markdown has won. Look at all the big programming sites out there, from GitHub to StackOverflow, they all use a superset of Markdown. It did so mostly (in my opinion) because it codified the formatting style people were already using in emails and because it was pragmatic enough to include HTML as an escape hatch. For what it's worth, I think Markdown is a fine choice for very much the same reason. RST has some nice properties (especially for documenting Python), but Markdown is much more common. Moreover, I've always found RST's linkification syntax a bit awkward. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Lazy object deserialization
that's too bad, i used lazy deserialization for an external sort thing i did aeons ago. http://www.haskell.org/pipermail/haskell-cafe/2007-July/029156.html that was an amusing exercise in lazy IO. these days it's probably better off doing something with pipes et al instead of unsafeInterleaveIO. b On Mar 13, 2013, at 2:54 PM, Scott Lawrence wrote: I tried it, but it still goes and reads the whole list. Looking at the `binary` package source code it seems that strict evaluation is hard-coded in a few places, presumably for performance reasons. It also seems to necessarily read the bytestring sequentially, so complex tree-like data structures would presumably encounter problems even if it worked for a list. Ah well. As long as I'm not duplicating someone else's work, I'm more than happy to go at this from scratch. On Wed, 13 Mar 2013, Jeff Shaw wrote: On 3/13/2013 12:15 AM, Scott Lawrence wrote: Hey all, All the object serialization/deserialization libraries I could find (pretty much just binary and cereal) seem to be strict with respect to the actual data being serialized. In particular, if I've serialized a large [Int] to a file, and I want to get the first element, it seems I have no choice but to deserialize the entire data structure. This is obviously an issue for large data sets. There are obvious workarounds (explicitly fetch elements from the database instead of relying on unsafeInterleaveIO to deal with it all magically), but it seems like it should be possible to build a cereal-like library that allows proper lazy deserialization. Does it exist, and I've just missed it? Thanks, I haven't tested this, but I suspect something like this could give you lazy binary serialization and deserialization. It's not tail recursive, though. newtype LazyBinaryList a = LazyBinaryList [a] instance Binary a => Binary (LazyBinaryList a) where put (LazyBinaryList []) = putWord8 0 put (LazyBinaryList (x:xs)) = putWord8 1 >> put x >> put (LazyBinaryList xs) get = do t <- getWord8 case t of 0 -> return (LazyBinaryList []) 1 -> do x <- get (LazyBinaryList xs) <- get return $ LazyBinaryList (x:xs) -- Scott Lawrence ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Open-source projects for beginning Haskell students?
On Mar 11, 2013, at 11:26 AM, Jason Dagit wrote: Myself and several of my friends would find it useful to have a plotting library that we can use from ghci to quickly/easily visualize data. Especially if that data is part of a simulation we are toying with. Therefore, this proposal is for: A gnuplot-, matlab- or plotinum-like plotting API (that uses diagrams as the backend?). The things to emphasize: * Easy to install: No gtk2hs requirement. Preferably just pure haskell code and similar for any dependencies. Must be cross platform. * Frontend: graphs should be easy to construct; customizability is not as important * Backend: options for generating static images are nice, but for the use case we have in mind also being able to render in a window from ghci is very valuable. (this could imply something as purely rendering to JuicyPixels and I could write the rendering code) * What I would hope from you is a willingness to exchange email and/or chat with the student(s) over the course of the project, to give them a bit of guidance/mentoring. I am certainly willing to help on that front, but of course I probably don't know much about your particular project. I am willing/able to take on the mentoring aspect :) i second this, but with a different emphasis. i would like a ggplot2-type DSL for generating graphs, for data analysis and exploration. i agree with : * it would be great to have no gtk2hs / cairo requirement. (i guess this means text rendering in the diagrams-svg backend needs to be solved.) i guess in the near-term, this is less important to me -- having a proper plotting DSL at all is an important start. * frontend : graphs should be easy to construct, but having some flexibility is important. the application here is being able to explore statistical data, with slicing, grouping, highlighting, faceting, etc. * backend : static images are enough for me, interactive is a plus. most importantly : it should be fast enough to work pleasantly with large datasets. ggplot2 is pretty awesome but kills my machine, routinely. i would be willing to mentor, but i'm not an expert enough i think! best, ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Haskell + RankNTypes + (forall p. p Char - p Bool) sound?
I was trying to figure out a way to write absurd :: (forall p. p Char -> p Bool) -> Void using only rank-n types. Someone suggested that Haskell with RankNTypes and a magic primitive of type (forall p. p Char -> p Bool) might be sound (disregarding the normal ways to get ⊥, of course). Is that true? Given either TypeFamilies or GADTs, you can write absurd. But it doesn't seem like you can write it with just RankNTypes. (This is related to GeneralizedNewtypeDeriving, which is more or less a version of that magic primitive.) This seems like something that GADTs (/TypeFamilies) give you over Leibniz equality: You can write data Foo a where FooA :: Foo Char FooB :: Void -> Foo Bool foo :: Foo Bool -> Void foo (FooB x) = x Without any warnings. On the other hand data Bar a = BarA (Is a Char) | BarB (Is a Bool) Void bar :: Bar Bool -> Void bar (BarB _ x) = x bar (BarA w) = -- ??? Doesn't seem possible. If it's indeed impossible, what's the minimal extension you would need to add on top of RankNTypes to make it work? GADTs seems way too big. Shachaf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
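For concreteness, here is the TypeFamilies version of absurd alluded to above (a sketch; Void is declared locally rather than imported from a library):

    {-# LANGUAGE TypeFamilies, RankNTypes, EmptyDataDecls #-}

    data Void

    type family F a
    type instance F Char = ()
    type instance F Bool = Void

    data Wrap a = Wrap { unWrap :: F a }

    -- Instantiating p at Wrap turns the Char-to-Bool "cast" into () -> Void.
    absurd :: (forall p. p Char -> p Bool) -> Void
    absurd cast = unWrap (cast (Wrap ()))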
Re: [Haskell-cafe] ANN: Nomyx 0.1 beta, the game where you can change the rules
On 27/02/2013, at 10:28 , Corentin Dupont corentin.dup...@gmail.com wrote: Hello everybody! I am very happy to announce the beta release [1] of Nomyx, the only game where You can change the rules. Don't forget 1KBWC: http://www.corngolem.com/1kbwc/ Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] haskell build phase is very slow
yi is pretty heavy, as these things go. So it's not too surprising that it's taking a while. GHC does try to recompile as little as possible...but as little as possible can be quite a lot. Inlining, and other optimizations GHC performs, makes the recompilation checker's job tricky; see [1]. Generally if you change a file you'll need to recompile its dependencies, and *their* dependencies, and so on. If you're coding along and just need a typecheck, ghci is your friend. Specifically, the :reload command tends to be fast. (You'll need to :load yourFile.hs the first time, of course.) You might also see if yi's -fhacking flag is helpful. It looks like it might be relevant, though I don't know either yi or your use case well enough to say for sure. General advice on speeding compilation is here: [2]. Most of it isn't all that relevant to you at the moment, since you're hacking on someone else's package. But always good to know. Best of luck, Ben [1]: http://www.haskell.org/ghc/docs/latest/html/users_guide/separate-compilation.html#recomp [2]: http://www.haskell.org/ghc/docs/latest/html/users_guide/sooner-faster-quicker.html#sooner ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] quotRem and divMod
On Mon, Jan 28, 2013 at 4:27 PM, Artyom Kazak artyom.ka...@gmail.com wrote: Hi! I’ve always thought that `quotRem` is faster than `quot` + `rem`, since both `quot` and `rem` are just wrappers that compute both the quotient and the remainder and then just throw one out. However, today I looked into the implementation of `quotRem` for `Int32` and found out that it’s not true: quotRem x@(I32# x#) y@(I32# y#) | y == 0 = divZeroError | x == minBound && y == (-1) = overflowError | otherwise = (I32# (narrow32Int# (x# `quotInt#` y#)), I32# (narrow32Int# (x# `remInt#` y#))) Why? The `DIV` instruction computes both, doesn’t it? And yet it’s being performed twice here. Couldn’t one of the experts clarify this bit? That code is from base 4.5. Here's base 4.6: quotRem x@(I32# x#) y@(I32# y#) | y == 0 = divZeroError -- Note [Order of tests] | y == (-1) && x == minBound = (overflowError, 0) | otherwise = case x# `quotRemInt#` y# of (# q, r #) -> (I32# (narrow32Int# q), I32# (narrow32Int# r)) So it looks like it was improved in GHC 7.6. In particular, by this commit: http://www.haskell.org/pipermail/cvs-libraries/2012-February/014880.html Shachaf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Undo records
I think acid-state (http://hackage.haskell.org/package/acid-state) might do what you want, at least in broad strokes. It uses a durable transaction log to store query and update events. As far as I know, the interface to the library doesn't expose an undo/rollback function, so you'd have a bit of work to do to extend it to your use case. But the core functionality to make it possible should be there. Can you use ghc extensions aside from Template Haskell? Template Haskell you can do without with acid-state, but without GADTs and so on you'll have problems. On Sun, Jan 6, 2013 at 12:01 PM, Casey Basichis caseybasic...@gmail.comwrote: Hi, I am still getting a hang of Haskell. Sorry if the answer is obvious. What sorts of packages and approaches should I be looking at if I was looking to store something like an Undo stack into a database. Each table would refer to a function. Each records input and outputs would specify both a table ID and record ID. The records would also have a data and a Process ID to associate all functions to a specific process and give them an order. No records are ever deleted. Rolling something back is instead a process of recreating a new, modified graph by taking the old graph from the database. I should note that while I can generate some of the boiler parts from template haskell in advance I'm ultimately using a stage 1 compiler with no GHCI o template haskell. Thanks, Casey -- Casey James Basichis Composer - Cartoon Network http://www.caseyjamesbasichis.com caseybasic...@gmail.com 310.387.7540 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] The end of an era, and the dawn of a new one
On 06/12/2012, at 3:56 , Simon Peyton-Jones wrote: Particularly valuable are offers to take responsibility for a particular area (eg the LLVM code generator, or the FFI). I'm hoping that this sea change will prove to be quite empowering, with GHC becoming more and more a community project, more resilient with fewer single points of failure. The LLVM project has recently come to the same point. The codebase has become too large for Chris Lattner to keep track of it all, so they've moved to a formal Code Ownership model. People own particular directories of the code base, and the code owners are expected to review patches for those directories. The GHC project doesn't have a formal patch review process, I think because the people with commit access on d.h.o generally know who owns what. Up until last week I think it was SPJ owns the type checker and simplifier, and SM owns everything else. :-) At this stage, I think it would help if we followed the LLVM approach of having a formal CODE_OWNERS file in the root path of the repo explicitly listing the code owners. That way GHC HQ knows what's covered and what still needs a maintainer. The LLVM version is here [1]. Code owners would: 1) Be the go-to person when other developers have questions about that code. 2) Fix bugs in it that no-one else has claimed. 3) Generally keep the code tidy, documented and well-maintained. Simon: do you want a CODE_OWNERS file? If so then I can start it. I think it's better to have it directly in the repo than on the wiki, that way no-one that works on the code can miss it. I suppose I'm the default owner of the register allocators and non-LLVM native code generators. Ben. [1] http://llvm.org/viewvc/llvm-project/llvm/trunk/CODE_OWNERS.TXT?view=markup ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Are there REPA linear algebra routines? e.g. Eigenvalues?
On 06/12/2012, at 3:18 , KC wrote: :) Not apart from the matrix-matrix multiply code in repa-algorithms. If you wanted to write some I'd be happy to fold them into repa-algorithms. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?
Gershom Bazerman wrote: On 11/30/12 10:44 AM, Dan Doel wrote: Lists! The finite kind. This could mean Seq for instance. On Nov 30, 2012 9:53 AM, Brent Yorgey byor...@seas.upenn.edu wrote: Any data type which admits structures of arbitrary but *only finite* size has a natural zippy Apply instance but no Applicative (since pure would have to be an infinite structure). The Map instance I mentioned above falls in this category. Though I guess I'm having trouble coming up with other examples, but I'm sure some exist. Maybe Edward knows of other examples. Another common case would be an embedded DSL representing code in a different language, targeting a different platform (or even an FPGA or the like), etc. You can apply `OtherLang (a -> b)` to an `OtherLang a` and get an `OtherLang b`, but you clearly can't promote (or lower, I guess) an arbitrary Haskell function into a function in your target language. This is the same reason that GArrows remove the `arr` function (http://www.cs.berkeley.edu/~megacz/garrows/). A fine example! And I am getting the drift... yes, this could be a useful abstraction. Now, on to Bind: the standard finite structure example for Bind is most probably the substitution thingy, i.e. if m :: m a, f :: a -> m b, then m >>= f means: replace all elements x :: a in m with f x and then flatten the result so it's an m b again. Like concatMap for lists, right? So, there is no return for that in the Map case for exactly the same reason as with Apply: the unit would have to have value id for every possible key, so cannot be finite. So what about an example for Bind\\Monad that is not yet another variation of the finite structure theme? Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
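To make the Map example concrete, here is a sketch of the zippy Apply and substitution-style Bind instances. The class and operator names follow the semigroupoids package, but local stand-ins are declared so the snippet is self-contained; the point is only that both instances exist while pure/return cannot.

    import Data.Map (Map)
    import qualified Data.Map as Map

    -- Local stand-ins for the semigroupoids classes.
    class Functor f => Apply f where
      (<.>) :: f (a -> b) -> f a -> f b

    class Apply f => Bind f where
      (>>-) :: f a -> (a -> f b) -> f b

    -- Zippy: apply a function to a value only where the keys match.
    instance Ord k => Apply (Map k) where
      fs <.> xs = Map.intersectionWith ($) fs xs

    -- Substitution: keep an entry only if the same key survives in f's result.
    instance Ord k => Bind (Map k) where
      m >>- f = Map.mapMaybeWithKey (\k x -> Map.lookup k (f x)) m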
Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?
Brent Yorgey wrote: On Thu, Nov 29, 2012 at 03:52:58AM +0100, Ben Franksen wrote: Tony Morris wrote: As a side note, I think a direct superclass of Functor for Monad is not a good idea, just sayin' class Functor f where fmap :: (a -> b) -> f a -> f b class Functor f => Apply f where (<*>) :: f (a -> b) -> f a -> f b class Apply f => Bind f where (=<<) :: (a -> f b) -> f a -> f b class Apply f => Applicative f where unit :: a -> f a class (Applicative f, Bind f) => Monad f where Same goes for Comonad (e.g. [] has (=>>) but not counit) ... and again for Monoid, Category, I could go on... Hi Tony even though I dismissed your mentioning this on the Haskell' list, I do have to admit that the proposal has a certain elegance. However, before I buy into this scheme, I'd like to see some striking examples for types with natural (or at least useful) Apply and Bind instances that cannot be made Applicative resp. Monad. Try writing an Applicative instance for (Data.Map.Map k). It can't be done, but the Apply instance is (I would argue) both natural and useful. I see. So there is one example. Are there more? I'd like to get a feeling for the abstraction and this is hard if there is only a single example. Also, it is not clear to me what laws should hold for them. http://hackage.haskell.org/package/semigroupoids defines all of these and specifies laws, presumably derived in a principled way. Ok. I was not surprised to see that there are not many laws for the classes without unit. Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] A big hurray for lambda-case (and all the other good stuff)
Hi Everyone just wanted to drop by to say how much I like the new lambda case extension. I use it all the time and I just *love* how it relieves me from conjuring up dummy variables, which makes the code not only easier to write but also to read. A big, huge thank you to the ghc developers. This has been so long on my wish list. Also much appreciated and long awaited: tuple sections (though I use them not quite as often). Both should *definitely* go into Haskell'13. Of course, thank you also for all the other beautiful stuff in ghc-7.6.1, especially PolyKinds, DataKinds etc. GHC is just simply amazing. You guys RULE THE WORLD! Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
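A minimal illustration of the two extensions being praised here (the function names are invented; LambdaCase requires GHC 7.6 or later):

  {-# LANGUAGE LambdaCase, TupleSections #-}

  -- Without LambdaCase one writes:  describe x = case x of ...
  -- The \case form avoids naming the scrutinee at all.
  describe :: Maybe Int -> String
  describe = \case
    Nothing -> "nothing here"
    Just n  -> "got " ++ show n

  -- A tuple section: (,"fixed") is shorthand for \x -> (x, "fixed").
  tagged :: [Int] -> [(Int, String)]
  tagged = map (,"fixed")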
Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?
Tony Morris wrote: As a side note, I think a direct superclass of Functor for Monad is not a good idea, just sayin' class Functor f where fmap :: (a -> b) -> f a -> f b class Functor f => Apply f where (<*>) :: f (a -> b) -> f a -> f b class Apply f => Bind f where (=<<) :: (a -> f b) -> f a -> f b class Apply f => Applicative f where unit :: a -> f a class (Applicative f, Bind f) => Monad f where Same goes for Comonad (e.g. [] has (=>>) but not counit) ... and again for Monoid, Category, I could go on... Hi Tony even though I dismissed your mentioning this on the Haskell' list, I do have to admit that the proposal has a certain elegance. However, before I buy into this scheme, I'd like to see some striking examples for types with natural (or at least useful) Apply and Bind instances that cannot be made Applicative resp. Monad. Also, it is not clear to me what laws should hold for them. Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Why Kleisli composition is not in the Monad signature?
Dan Doel wrote: On Tue, Oct 16, 2012 at 10:37 AM, AUGER Cédric sedri...@gmail.com wrote: join IS the most important from the categorical point of view. In a way it is natural to define 'bind' from 'join', but in Haskell, it is not always possible (see the Monad/Functor problem). As I said, from the mathematical point of view, join (often noted μ in category theory) is the (natural) transformation which with return (η that I may have erroneously written ε in some previous mail) defines a monad (and requires some additional law). This is the way it's typically presented. Can you demonstrate that it is the most important presentation? I'd urge caution in doing so, too. For instance, there is a paper, Monads Need Not Be Endofunctors, that describes a generalization of monads to monads relative to another functor. And there, bind is necessarily primary, because join isn't even well typed. I don't think it's written by mathematicians per se (rather, computer scientists/type theorists). But mathematicians have their own particular interests that affect the way they frame things, and that doesn't mean those ways are better for everyone. Right. Mathematical /conventions/ can and should be questioned. Sometimes they are not appropriate to the application domain. Sometimes the conventions are just stupid or obsolete even in a purely mathematical context (a well-known example is the extra syntax sugar for binomial coefficients, but there are worse ones), and you still find them in modern text books. Talk about backwards compatibility... My preference for Kleisli composition is because it makes the monad laws so much easier to write down and understand. Everywhere it is said that >>= must be associative and then the laws are written down for >>= and return and it is very hard to see what this lambda grave has to do with associativity or units. When I started with Haskell, this was all I could find. It was years later that I stumbled over a text that explained it with >=> and suddenly it all became simple and clear and I finally understood the monad laws! So, maybe >>= is the better primitive operation w.r.t. implementation, but IMO >=> is *much* more efficient w.r.t. understanding the monad laws. Since it is natural to explain the laws of a class using only class methods, I would prefer if >=> was added to the class with default implementations for >=> in terms of >>= and vice versa, so that you can still use >>= as the primitive operation when implementing an instance. Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
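To make the point concrete, here is how the laws look with Kleisli composition; (>=>) is exported by Control.Monad and is definable from (>>=), so nothing new is needed (the helper functions half and quarter are invented for the example):

  import Control.Monad ((>=>))

  -- (>=>) in terms of (>>=):   f >=> g = \a -> f a >>= g
  --
  -- Stated with (>=>), the monad laws are literally the laws of a category:
  --   return >=> f    ==  f                    -- left identity
  --   f >=> return    ==  f                    -- right identity
  --   (f >=> g) >=> h ==  f >=> (g >=> h)      -- associativity

  -- A small check in the Maybe monad:
  half :: Int -> Maybe Int
  half n = if even n then Just (n `div` 2) else Nothing

  quarter :: Int -> Maybe Int
  quarter = half >=> half   -- quarter 12 == Just 3, quarter 6 == Nothing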
Re: [Haskell-cafe] Reaching Max Bolingbroke
On 19/11/2012, at 24:40 , Roman Cheplyaka wrote: For the last two months I've been trying to reach Max Bolingbroke via his hotmail address, github and linkedin, but did not succeed. Does anyone know if he's well? If someone could help by telling him that I'd like to get in touch, I'd appreciate that. He wasn't at ICFP either. I think SPJ said he was in the middle of writing up his PhD thesis. When I was doing mine I was out of circulation for a good 3 months. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Taking over ghc-core
With Don Stewart's blessing (https://twitter.com/donsbot/status/267060717843279872), I'll be taking over maintainership of ghc-core, which hasn't been updated since 2010. I'll release a version with support for GHC 7.6 later today. Shachaf ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Taking over ghc-core
Shachaf Ben-Kiki shac...@gmail.com writes: With Don Stewart's blessing (https://twitter.com/donsbot/status/267060717843279872), I'll be taking over maintainership of ghc-core, which hasn't been updated since 2010. I'll release a version with support for GHC 7.6 later today. Thanks! I was needing ghc-core a few weeks ago. It'll be nice to see this project maintained again. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Serializing with alignment
Hi Everyone I want to implement a binary protocol that, unfortunately, has some alignment restrictions. In order to fulfill these, I need access to the current offset in the bytestring. The binary package does provide a function bytesRead :: Get Int64 but only for the Get monad; there is no equivalent for the Put monad. So my first question: is there a serialization library that offers something like bytesWritten :: PutM Int64 Failing that, would you think adding it to binary is a reasonable feature request? I have taken a cursory look at the implementation and it looks like this is not a matter of simply adding a missing function, but would probably need an addition to internal data structures. I could also try and wrap the PutM from binary with a StateT transformer and count the bytes myself. I will probably have to use at least a ReaderT wrapper anyway, since I have to pass the byte order as a parameter (byte order gets negotiated between client and server, it is not fixed). I was really hoping that there is some library that has built-in support for stateful serialization (alignment, byte-order, etc). Any pointers, hints, etc are much appreciated. Cheers -- Ben Franksen () ascii ribbon campaign - against html e-mail /\ www.asciiribbon.org - against proprietary attachments ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
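One possible shape for the StateT idea described above, as a rough sketch only: CountedPut, bytesWritten', and align are invented names, and the offset is counted at the wrapped call sites (so the count is only right if every primitive goes through such wrappers) rather than being obtained from binary itself.

  import           Control.Monad.Trans.Class (lift)
  import           Control.Monad.Trans.State.Strict
  import qualified Data.Binary.Put as P
  import qualified Data.ByteString.Lazy as BL
  import           Data.Int (Int64)
  import           Data.Word (Word8)

  -- A Put that tracks its own byte offset.
  type CountedPut = StateT Int64 P.PutM

  putWord8 :: Word8 -> CountedPut ()
  putWord8 w = lift (P.putWord8 w) >> modify (+1)

  bytesWritten' :: CountedPut Int64
  bytesWritten' = get

  -- Pad with zero bytes until the offset is a multiple of n.
  align :: Int64 -> CountedPut ()
  align n = do
    off <- get
    let pad = (n - off `mod` n) `mod` n
    mapM_ (const (putWord8 0)) [1 .. pad]

  runCountedPut :: CountedPut a -> BL.ByteString
  runCountedPut p = P.runPut (evalStateT p 0 >> return ())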
Re: [Haskell-cafe] Deriving settings from command line, environment, and files
David Thomas davidleotho...@gmail.com writes: Is there a library that provides a near-complete solution for this? I looked around a bit and found many (many!) partial solutions on hackage, but nothing that really does it all. In coding it up for my own projects, however, I can't help but feel like I must be reinventing the wheel. What I want is something that will process command line options (this seems to be where most packages are targetted), environment variables, and settings files (possibly specified in options), and override/default appropriately. Did I miss something? Do you have an example of a library in another language that does what you are looking for? boost's program_options library can handle arguments read from a configuration file, but that's the closest example I can come up with. The design space for command line parsing libraries alone is pretty large; when one adds in configuration file and environment variable parsing, option overriding, and the like it's downright massive. Moreover, programs vary widely in their configuration requirements; it seems to me that it would be very difficult to hit a point in this space which would be usable for a sufficiently large number of programs to be worth the effort. This is just my two cents, however. If I were you, I'd look into building something on top of an existing option parsing library. I think optparse-applicative is particularly well suited to this since it nicely separates the definition of the options from the data structure used to contain them and doesn't rely on template haskell (unlike cmdargs). Composing this with configuration file and environment variable parsing seems like it shouldn't be too tough. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
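As a very rough sketch of the kind of composition meant here (every option, environment-variable, and field name below is invented), the command line can be parsed into Maybe-valued options with optparse-applicative and unset options filled in from the environment afterwards:

  import Control.Applicative (optional, (<|>), (<$>), (<*>))
  import Data.Maybe (fromMaybe)
  import Data.Monoid ((<>))
  import Options.Applicative
  import System.Environment (lookupEnv)

  data Config = Config { cfgHost :: String, cfgPort :: Int } deriving Show

  -- Options are Maybe so that unset flags can be filled in later.
  cli :: Parser (Maybe String, Maybe Int)
  cli = (,) <$> optional (strOption (long "host" <> metavar "HOST"))
            <*> optional (option auto (long "port" <> metavar "PORT"))

  -- Priority: command line, then environment, then a built-in default.
  -- (A real program would validate the environment values instead of read.)
  resolve :: (Maybe String, Maybe Int) -> IO Config
  resolve (mHost, mPort) = do
    envHost <- lookupEnv "MYAPP_HOST"
    envPort <- lookupEnv "MYAPP_PORT"
    return Config
      { cfgHost = fromMaybe "localhost" (mHost <|> envHost)
      , cfgPort = fromMaybe 8080 (mPort <|> (read <$> envPort))
      }

  main :: IO ()
  main = execParser (info (helper <*> cli) fullDesc) >>= resolve >>= print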
Re: [Haskell-cafe] Adding custom events to eventlog
Janek S. fremenz...@poczta.onet.pl writes: Dear list, I'm using ThreadScope to improve performance of my parallel program. It would be very helpful for me if I could place custom things in eventlog (e.g. now function x begins). Is this possible? Yes, it certainly is possible. Have a look at Debug.Trace.traceEvent and traceEventIO. I have found these to be a remarkably powerful tool for understanding parallel performance. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
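A minimal example of the suggestion (the helper name 'phase' is invented); compile with -eventlog and run with +RTS -ls so the markers show up alongside the GC and spark events in ThreadScope:

  import Control.Exception (evaluate)
  import Debug.Trace (traceEventIO)

  -- Bracket a phase of the program with custom eventlog markers.
  phase :: String -> IO a -> IO a
  phase name act = do
    traceEventIO ("START " ++ name)
    r <- act
    traceEventIO ("END " ++ name)
    return r

  main :: IO ()
  main = do
    total <- phase "sum" (evaluate (sum [1 .. 1000000 :: Int]))
    print total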
Re: [Haskell-cafe] Possible bug in Criterion or Statistics package
Aleksey Khudyakov alexey.sklad...@gmail.com writes: On 13.08.2012 19:43, Ryan Newton wrote: Terrible! Quite sorry that this seems to be a bug in the monad-par library. I'm copying some of the other monad-par authors and we hopefully can get to the bottom of this. If it's not possible to create a smaller reproducer, is it possible to share the original test that triggers this problem? In the meantime, it's good that you can at least run without parallelism. Here is slightly simplified original test case. By itself program is very small but there is statistics and criterion on top of the monad-par Failure occurs in the function Statistics.Resampling.Bootstrap.bootstrapBCA. However I couldn't trigger bug with mock data. Has there been any progress or an official bug report on this? Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Is it worth adding Gaussian elimination and eigenvalues to REPA?
KC kc1...@gmail.com writes: I realize if one wants speed you probably want to use the hMatrix interface to GSL, BLAS and LAPACK. Worth it in the sense of have a purely functional implementation. I, for one, have needed these in the past and far prefer Repa's interface to that of hMatrix. I considered implementing these myself but I doubt that I could write an implementation worthy of using having relatively little knowledge of this flavor of numerics (stability is a pain, so I hear). Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Haskell] ANNOUNCE: tardis
On Tue, Aug 7, 2012 at 7:04 AM, Dan Burton danburton.em...@gmail.com wrote: As a side note, since the code base is relatively small, it can also serve as a simple demonstration of how to use a cabal flag in conjunction with CPP to selectively include swaths of code (see Control/Monad/Tardis.hs and tardis.cabal). Eep, your API changes based on compile-time settings. I think this is a bad idea, because other packages cannot depend on a flag, so realistically other packages cannot depend on the instances existing, so they're nearly useless. UndecidableInstances is excessively maligned and usually fine anyway. If it compiles, it won't go wrong. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Monads with The contexts?
On Thu, Jul 12, 2012 at 11:01 AM, Takayuki Muranushi muranu...@gmail.com wrote: sunPerMars :: [Double] sunPerMars = (/) <$> sunMass <*> marsMass Sadly, this gives too many answers, and some of them are wrong because they assume different Earth mass in calculating Sun and Mars masses, which led to inconsistent calculation. I think what you want to do is factor out the Earth's mass, and do your division first: sunPerMars'' = (/) <$> sunMassCoef <*> marsMassCoef The mass of the earth cancels. That gives a list of length 9, where your approach gave 16 distinct results. But I think that's just floating point rounding noise. Try the same monadic calculation with integers and ratios. The moral? Using numbers in a physics calculation should be your last resort ;) ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
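Spelled out with placeholder numbers (the values below are invented for illustration, not measured data), the difference between the two formulations is:

  import Control.Applicative ((<$>), (<*>))

  -- Each uncertain quantity is a list of candidate values.
  earthMass, sunPerEarth, marsPerEarth :: [Double]
  earthMass    = [5.96e24, 5.97e24, 5.98e24]
  sunPerEarth  = [332946.0, 333000.0]
  marsPerEarth = [0.107, 0.108]

  -- Inconsistent: the Sun and Mars masses may each pick a *different*
  -- Earth mass from the list.
  sunPerMarsBad :: [Double]
  sunPerMarsBad = (/) <$> ((*) <$> sunPerEarth <*> earthMass)
                      <*> ((*) <$> marsPerEarth <*> earthMass)

  -- Consistent: divide the ratios first, so the Earth mass never enters
  -- (equivalently, bind earthMass once and share the result).
  sunPerMarsGood :: [Double]
  sunPerMarsGood = (/) <$> sunPerEarth <*> marsPerEarth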
Re: [Haskell-cafe] Hackage 2 maintainership
Krzysztof Skrzętnicki gte...@gmail.com writes: Hi, Are there any news how things are going? Things have been pretty stagnant yet again. I was more than a bit over-optimistic concerning the amount of time I'd have available to put into this project. Moreover, the tasks required to get Hackage into a usable state weren't nearly as clear as I originally thought. Unfortunately, those who have historically been very active in Hackage development and would have been responsible for much of the recent work in advancing Hackage 2 to where it is now have other demands on their time. My understanding is there is a fair amount of half-completed code floating around. What remains there to be done to get us to Hackage 2? I found this list of tickets: https://github.com/haskell/cabal/issues?labels=hackage2page=1state=open Is there anything more to be done? This list is definitely a start. One of the issues that was also realized is the size of the server's memory footprint. Unfortunately acid-state's requirement that all data either be in memory or have no ACID guarantees was found to be a bit of a limitation. If I recall correctly some folks were playing around with restructuring the data structures a bit to reduce memory usage. I really don't know what happened to these efforts. On the other hand, it seems that the test server is still ticking away at http://hackage.factisresearch.com/ with an up-to-date package index, so things are looking alright on that front. At this point, it seems that we are in a situation yet again where someone just needs to start applying gentle pressure and figure out where we stand. I'm afraid especially now I simply don't have time to take on this project in any serious capacity. Perhaps one way forward would be to propose the project again as a GSoC project next year. That being said, there is no guarantee that someone would step up to finish it. Just my two cents. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] How do people still not understand what FP is about? What are we doing wrong?
Saw this float by in twitter, and it made me a bit sad. Obviously this is still a large misunderstanding of FP in the larger programming community and it makes me wonder what we FP enthusiasts are doing wrong to not get the message out to people. Programming languages that require random senseless voodoo to get an effect are awesome. Let's make programming hard through poor design. [1] The sad thing about this is that the inverse of this has more truth to it; that languages that allow people to intersperse side effects anywhere in their computation without thought are flawed by design and allow programmers to do stupid things that hinder the composability, thread safety and ability to reason of/about their code. Has anyone had any experience / success with convincing people that the senseless voodoo is actually a boon rather than a bane? Is it even worth trying to convince people so set in their ways? ( Sorry if this is even too off-topic for the cafe. Just needed a place to vent my frustration at this. ) Cheers, Ben [1] https://twitter.com/cwestin63/status/214793627170390018 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How do people still not understand what FP is about? What are we doing wrong?
just to add to the ridiculousness quotient of this conversation http://web.archive.org/web/20080406183542/http://www.lisperati.com/landoflisp/panel01.html (i don't know where to find this other than in the web archive.) ben On Jun 18, 2012, at 1:44 PM, Christopher Done wrote: On 18 June 2012 22:28, Ertugrul Söylemez e...@ertes.de wrote: You just have to live with the fact that there will always be a small percentage of retarded people. It's best to just ignore them. Well, they're not stupid. Just very stubborn. Like most programmers. Stupid people can be taught to be smarter, stubborn people don't want to be taught. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How do people still not understand what FP is about? What are we doing wrong?
Agreed, definitely out of context now that he has inadvertently cleared that up since this post. That thing that they say about jumping to assumptions … definitely well proven and in force today. Shouldn't be posting to mailing lists so early in the morning. :-/ To be clear though, this post wasn't about calling anyone stupid. Chris most certainly isn't. Calling people stupid just because they disagree with you is a pretty awful thing and doesn't convince anyone anything. Maybe it was poorly worded, but I was more after ways to educate people why things are the way they are in haskell land and what powers that brings. The person is still able to disagree with that and prefer the old ways, but at least their decision wasn't fuelled by ignorance; which is the most important thing. But yeah; this is all out of context so all of the above is fairly moot, anyhow. Apologies for the wild early-morning assumptions, everyone! Cheers, Ben On Tuesday, 19 June 2012 at 6:43 AM, john melesky wrote: On Tue, Jun 19, 2012 at 05:59:57AM +1000, Ben Kolera wrote: Saw this float by in twitter, and it made me a bit sad. Obviously this is still a large misunderstanding of FP in the larger programming community and it make me wonder what we FP enthusiasts are doing wrong to not get the message out to people. Programming languages that require random senseless voodoo to get an effect are awesome. Let's make programming hard through poor design. [1] If you click through and look at his later tweets, it's clear he's talking about Objective-C. Unless you're suggesting Objective-C is the language of FP enthusiasts, i think it's safe to say you heard him out of context. :) -john ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Installing pandoc / json with ghc 6.12.1
Am I doing something wrong here? Well, you're using ghc-6.12 ... :-) The most recent version of pandoc that Hackage claims to have built with ghc 6.12 looks to be 1.6. Rolling back that far eliminates the json dependency entirely, so I think it would solve your issue. Or you could use the Pandoc in Debian stable, which appears to be 1.5.1. A more recent ghc would probably also work, of course, but I imagine you're trying the Debian stable version for a reason. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Haskell (GHC 7) on ARM
Joshua Poehls jos...@poehls.me writes: Hello Ben, Hello, Sorry for the latency. I'm currently on vacation in Germany so I haven't had terribly consistent Internet access. I've Cc'd haskell-cafe@ as I've been meaning to document my experiences anyways and your email seems like a good excuse to do this. I just got a Raspberry Pi and I'm interested in running Haskell on it. So far I'm running Arch Linux ARM and I noticed there is is a GHC package available, but it is version 6.12.3. Doing research online I saw that you did some work on adding GHCi support for ARM in GHC 7.4. I was wondering if you had anything to share related to running GHC/GHCI 7.4 on ARM, specifically Arch Linux ARM. Indeed my ARM linker patch was merged into GHC 7.4.2. This means that ARM is now a fully supported target platform for GHC, complete with GHCi support (which is pretty much necessary for many ubuiquitous packages). My ARM development environment consists of a PandaBoard (and before this a BeagleBoard) running Linaro's Ubuntu distribution. For this reason, I'm afraid I won't be of terribly much help in the case of Arch. That being said, GHC has a wonderful build system which has never given me trouble (except in cases of my own stupidity). Just installing the required development packages and following the build instructions on Wiki[1] is generally quite sufficient. I've also have some notes of my own[2] where I've collected my own experiences. As you likely know, GHC doesn't have a native ARM code generator of its own, instead relying on LLVM for low-level code generation. For this reason, you'll first need to have a working LLVM build. I have found that the ARM backend in the 3.0 release is a bit buggy and therefore generally pull directly from git (HEAD on my development box is currently sitting at 7750ff1e3cbb which seems to be stable). That being said, I haven't done much work in this space recently and it's quite likely that the 3.1 release is better in this respect. Compiling LLVM is one case where I have found the limited memory of my BeagleBoard (which is I believe is similar to the Raspberry Pi) to be quite troublesome. There are cases during linking where over a gigabyte of memory is necessary. At this stage the machine can sit for hours thrashing against its poor SD card making little progress. For this reason, I would strongly recommend that you cross-compile LLVM. The process is pretty straightforward and I have collected some notes on the matter[3]. The size of GHC means that in principle one would prefer to cross-compile. Unfortunately, support[4] for this is very new (and perhaps not even working/complete[5]?) so we have to make due with what we have. The build process itself is quite simple. You will want to be very careful to ensure that the ghc/libraries/ trees are sitting on the same branch as your ghc/ tree (with the ghc/sync-all script). Inconsistency here can lead to very ambiguous build failures. Otherwise, the build is nearly foolproof (insofar as this word can describe any build process). The build takes the better part of a day on my dual-core Pandaboard. I can't recall what the maximum memory consumption was but I think it might fit in 512MB. Lastly, if I recall correctly, you will find that you'll need to first build GHC 7.0 bootstrapped from 6.12 before you can build 7.4 (which requires a fairly new stage 0 GHC). Let the list know if you encounter any issues. 
I'll try to dust off my own development environment once I get back to the states next week to ensure that everything still works. I've been meaning to setup the PandaBoard as a build slave as Karel's has been failing for some time now (perhaps you could look into this, Karel?). Moreover, perhaps I can find a way to produce redistributable binaries to help others get started. Cheers, - Ben [1] http://hackage.haskell.org/trac/ghc/wiki/Building [2] http://bgamari.github.com/posts/ghc-llvm-arm.html [3] http://bgamari.github.com/posts/cross-compiling_llvm.html [4] http://hackage.haskell.org/trac/ghc/wiki/CrossCompilation [5] http://www.haskell.org/pipermail/cvs-ghc/2012-February/070791.html ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Is Repa suitable for boxed arrays?...
On 03/06/2012, at 18:10 , Stuart Hungerford wrote: I need to construct a 2D array of a Haskell data type (boxed ?) where each array entry value depends on values earlier in the same array (i.e. for entries in previous row/column indexes). It should work. Use the V type-index for boxed arrays [1], so your array type will be something like (Array V DIM2 Float) If you can't figure it out then send me a small list program showing what you want to do. Repa (V3.1.4.2) looks very powerful and flexible but it's still not clear to me that it will work with arbitrary values as I haven't been able to get any of the Wiki tutorial array creation examples to work (this is with Haskell platform 2012.2 pre-release for OS/X). The wiki tutorial is old. It was written for the Repa 2 series, but Repa 3 is different. However I just (just) submitted a paper on Repa 3 to Haskell Symposium, which might help [2] [1] http://hackage.haskell.org/packages/archive/repa/3.1.4.2/doc/html/Data-Array-Repa-Repr-Vector.html [2] http://www.cse.unsw.edu.au/~benl/papers/guiding/guiding-Haskell2012-sub.pdf Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
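A rough sketch of one way to do this, assuming the Repa 3 boxed representation in Data.Array.Repa.Repr.Vector (the function name 'dependent' and the particular recurrence are invented): build the underlying boxed vector first, where each new entry may inspect the entries already built, then wrap it as an Array V DIM2.

  import           Data.Array.Repa             (Array, DIM2, ix2)
  import           Data.Array.Repa.Repr.Vector (V, fromVector)
  import qualified Data.Vector                 as Vec

  -- rows x cols array, filled row-major; each entry below the first row
  -- depends on the entry directly above it.
  dependent :: Int -> Int -> Array V DIM2 Double
  dependent rows cols = fromVector (ix2 rows cols) vec
    where
      vec = Vec.constructN (rows * cols) step
      step built
        | i < cols  = fromIntegral i                 -- first row: seed values
        | otherwise = built Vec.! (i - cols) + 1     -- later rows
        where i = Vec.length built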
Re: [Haskell-cafe] mapping a concept to a type
I wonder if you want a typeclass here, rather than a type? A Normal Rule is pretty much a State Transformer, while a Meta Rule seems like a higher-order function on Normal Rules[*]. These are different kinds of things --- and I say kind advisedly --- so perhaps better to define the specific commonalities you need than to try to shoehorn them both into one type. [*]: Possibly related question: Can a Meta Rule depend upon an implementation detail of a Normal rule? In other words, does rule1 g == rule2 g imply myMetaRule rule1 == myMetaRule rule2 ? ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Can Haskell outperform C++?
Kevin Charter kchar...@gmail.com writes: snip For example, imagine you're new to the language, and as an exercise decide to write a program that counts the characters on standard input and writes the count to standard output. A naive program in, say, Python will probably use constant space and be fairly fast. A naive program in Haskell stands a good chance of having a space leak, building a long chain of thunks that isn't forced until it needs to write the final answer. On small inputs, you won't notice. The nasty surprise comes when your co-worker says "cool, let's run it on this 100 MB log file!" and your program dies a horrible death. If your friend is a sceptic, she'll arch an eyebrow and secretly think your program -- and Haskell -- are a bit lame. I, for one, can say that my initial impressions of Haskell were quite scarred by exactly this issue. Being in experimental physics, I often find myself writing data analysis code doing relatively simple manipulations on large(ish) data sets. My first attempt at tackling a problem in Haskell took nearly three days to get running with reasonable performance. I can only thank the wonderful folks in #haskell profusely for all of their help getting through that period. That being said, it was quite frustrating at the time and I often wondered how I could tackle a reasonably large project if I couldn't solve even a simple problem with halfway decent performance. If it weren't for #haskell, I probably would have given up on Haskell right then and there, much to my deficit. While things have gotten easier, even now, nearly a year after writing my first line of Haskell, I still occasionally find a performance issue^H^H^H^H surprise. I'm honestly not really sure what technical measures could be taken to ease this learning curve. The amazing community helps quite a bit but I feel that this may not be a sustainable approach if Haskell gains greater acceptance. The profiler is certainly useful (and much better with GHC 7.4), but there are still cases where the performance hit incurred by the profiling instrumentation precludes this route of investigation without time-consuming guessing at how to pare down my test case. It's certainly not an easy problem. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
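The character-counting example made concrete: the naive left fold builds a chain of (+1) thunks proportional to the input, while forcing the accumulator keeps the count in constant space (String I/O is used only to keep the sketch short; a real version would read ByteString):

  import Data.List (foldl')

  -- Builds one thunk per character; dies on a 100 MB input.
  leakyCount :: String -> Int
  leakyCount = foldl (\n _ -> n + 1) 0

  -- foldl' forces the running count at each step: constant space.
  strictCount :: String -> Int
  strictCount = foldl' (\n _ -> n + 1) 0

  main :: IO ()
  main = interact (\s -> show (strictCount s) ++ "\n")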
Re: [Haskell-cafe] Can Haskell outperform C++?
Yves Parès yves.pa...@gmail.com writes: The profiler is certainly useful (and much better with GHC 7.4) What are the improvements in that matter? (I just noticed that some GHC flags wrt profiling have been renamed) The executive summary can be found in the release notes[1]. There was also a talk I remember watching a while ago which gave a pretty nice overview. I can't recall, but I might have been this[2]. Lastly, profiling now works with multiple capabilities[3]. Cheers, - Ben [1] http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html [2] http://www.youtube.com/watch?v=QBFtnkb2Erg [3] https://plus.google.com/107890464054636586545/posts/hdJAVufhKrD ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Correspondence between libraries and modules
Alvaro Gutierrez wrote: I've only dabbled in Haskell, so please excuse my ignorance: why isn't there a 1-to-1 mapping between libraries and modules? As I understand it, a library can provide any number of unrelated modules, and conversely, a single module could be provided by more than one library. I can see how this affords library authors more flexibility, but at a cost: there is no longer a single, unified view of the library universe. (The alternative would be for every module to be its own, hermetic library.) So I'm very interested in the rationale behind that aspect of the library system. I am probably repeating arguments brought forward by others, but I really like that the Haskell module name space is ordered along functionality rather than authorship. If I ever manage to complete an implementation of the EPICS pvData project in Haskell, it will certainly inherit the Java module naming convention and thus will contain modules named Org.Epics.PvData.XXX, *but* if I need to add utility functions to the API that are generic list processing functions they will certainly live in the Data.List.* name space and if I need to add type level stuff (which is likely) it will be published under Data.Type.* etc. This strikes me as promoting re-use: it makes it far easier and more likely to factor out these things into a separate general purpose library or maybe even integrate them into a widely known standard library. It also gives you a much better idea what the thing you export is doing than if it is from, say, Org.Epics.PvData.Util. Finally, it gives the package author an incentive to actually do the refactoring that makes it obvious where the function belongs to, functionally. Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] heterogeneous environment
dear static typing masters -- while working on an STM-like library, i ran into the issue of trying to create a transaction log for reads / writes of heterogeneous reference types. i don't have a good answer to this problem. the problem is twofold : first, the general heterogeneous collection problem, and second, connecting a reference to it's log. there are some solutions i've tried but each has drawbacks : - store the transaction log inside of the reference itself. essentially, each reference has an IORef (Map ThreadId a) associated to it. this is the approach used by [1]. unfortunately this creates a point of concurrency contention at each reference for each read / write, and a lot of repeated computation (the transaction log is spread out in a lot of pieces.) - use Data.Unique to identify Refs, and use existential quantification or Data.Dynamic to create a heterogenous Map from uid to log. for example, to represent a log of compare-and-swaps we might do something like data Ref a = Ref (IORef a) Unique data OpaqueCAS = forall a . OpaqueCAS { casRef :: Ref a, oldVal :: a, newVal :: a } type CASLog = Map Unique OpaqueCAS logCAS r@(Ref _ uid) o n log = insert uid (OpaqueCAS r o n) log... but what if the transaction wants to perform a second CAS on a reference we've already CAS'd? then we should create a combined OpaqueCAS record which expects the first oldVal and swaps in the second newVal. unfortunately the type checker balks at this, as it can't prove that the type variable 'a from the first record is the same as the 'a in the new record; i suppose there might be some fancy type shenanigans which might solve this... Data.Dynamic works but puts a Typeable constraint on the Ref type, so requires the user to modify their data types, and requires a run-time type check (and while performance isn't paramount now it will become important to me later.) also it doesn't feel right... - tupling and functional representations. a monadic function that does a read on an reference can be thought of as a pure function with an extra argument. a monadic function that does a write can be thought of as a pure function with an extra return value. combining all the reads and writes into a transaction log is a big network / switchboard connecting problem, e.g. creating the extra inputs / outputs to the various functions and then stitching them together. running the monad then just is connecting up the final composed function to the actual input / output references. in a sense this amounts to tupling (or currying) the heterogeneous types. (is this is a kind of final representation, in the finally tagless sense?) i looked at this there were two problems : 1 - the representation is very difficult to manipulate, at least the way i was trying (using the arrow operations); the switchboard problem is extremely verbose, and 2 - it is hard to reconcile identity and position in the tuples -- the repeated read / write problem again. i also experimented with HList but got stuck on similar issues. it strikes me this is just the problem of keeping an environment for an interpreter of a language with mutable heterogeneous reference types. this must be a problem that has either been solved or else there is a haskell point of view on it i'm not grasping which avoids the need for this data structure. maybe there is a way of writing this as an interpreter or using some existing monad, like the ST monad? 
best, ben [1] - Frank Huch and Frank Kupke, A High-Level Implementation of Composable Memory Transactions in Concurrent Haskell ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] heterogeneous environment
thanks oleg and heinrich for the solutions; i'll definitely take a look at the vault package. i'll probably end up using unsafeCoerce though, it's too tempting (shame on me.) On May 2, 2012, at 2:33 AM, o...@okmij.org wrote: It seems you actually prefer this solution, if it worked. This solution will entail some run-time check one way or another, because we `erase' types when we store them in the log and we have to recover them later. actually i don't prefer this solution. i'm interested to hear if there are other solutions (or ways to avoid the problem entirely.) i don't know what to search for, something like interpreters with environments with heterogeneous types. i find the circuit-diagram / functional representation the most interesting, but it seems unfortunately syntactically impossible. best, ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)
heinrich and all -- thanks for the illuminating comments, as usual. i've had a little bit of time to play around with this and here's what i've concluded (hopefully i'm not mistaken.) 1 - while composeability makes STM a great silver bullet, there are other composable lower level paradigms. aaron has identified a few fundamental algorithms that appear to be composable, and used a lot in concurrent algorithms / data structures : k-CAS, exponential backoff, helping and elimination. 2 - k-CAS (and more generally k-RMW) is composable. i'm not exactly sure if i'd call k-CAS monadic but it is at least applicative (i'm not sure what the exact definition of k-CAS is. a bunch of 1-CASs done atomically seems applicative; allowing them to interact seems monadic. certainly k-RMW is monadic.) while it is possible to implement STM on top of k-CAS and vice versa, k-CAS can get you closer to the metal, and if you can special case 1-CAS to hardware you will win on a lot of benchmarks. a lot of concurrent algorithms only need 1-CAS. 3 - backoff, elimination and helping help scalability a lot, so you want support for them. elimination and helping require communication between transactions, whereas STM is all about isolation, so reagents are fundamentally different in this regard. reagents as implemented in the paper are not entirely monadic (by accident, i think the author intended them to be.) as far as i can see he uses an applicative form of k-CAS, and the reagents on top of it are applicative : his computed combinator (monadic bind) does not allow post-composition (it has no continuation.) there is no reason why it could not be built on top of a monadic k-RMW and be fully monadic. however he is able to recognize and special-case 1-CAS, which gives great performance of course. however, this does bring up a general question : why are applicative functors (often) faster than monads? malcolm wallace mentioned this is true for polyparse, and heinrich mentioned this is true more generally. is there a yoga by which one can write monadic functors which have a specialized, faster applicative implementation? right now i'm reading up on k-CAS / k-RMW implementations, and i think i'm going to start writing that monad before moving on to elimination / helping. i'm finding a two things difficult : - it is hard to represent a unified transaction log because of heterogeneous types, and - allowing a fully monadic interface makes it hard for me to special case 1-CAS (or applicative k-CAS.) there are workarounds for the first issue (separate logs for each reference, or a global clock as in Transactional Locking II.) for the second, right now i'm wondering if i'm going to have to write a compiler for a little DSL; i'd like to be able to exploit applicative performance gains generally, and special case 1-CAS. best, ben On Apr 6, 2012, at 5:38 AM, Heinrich Apfelmus wrote: Ben wrote: perhaps it is too late to suggest things for GSOC -- but stephen tetley on a different thread pointed at aaron turon's work, which there's a very interesting new concurrency framework he calls reagents which seems to give the best of all worlds : it is declarative and compositional like STM, but gives performance akin to hand-coded lock-free data structures. he seems to have straddled the duality of isolation vs message-passing nicely, and can subsume things like actors and the join calculus. 
http://www.ccs.neu.edu/home/turon/reagents.pdf he has a BSD licensed library in scala at https://github.com/aturon/ChemistrySet if someone doesn't want to pick this up for GSOC i might have a hand at implementing it myself. That looks great! While I didn't take the time to understand the concurrency model in detail, the overall idea is to use arrows that can be run atomically runAtomically :: Reagent a b -> (a -> IO b) This is very similar to STM: combining computations within the monad/arrow is atomic while combining computations outside the monad/arrow can interleave them. runAtomically (f . g) -- atomic runAtomically f . runAtomically g -- interleaving Actually, it turns out that the Reagent arrow is also a monad, but the author seems to claim that the static arrow style enables certain optimizations. I haven't checked his model in detail to see whether this is really the case and how exactly it differs from STM, but we know that situations like this happen for parser combinators. Maybe it's enough to recast reagents as an applicative functor? To summarize: the way I understand it is that it's apparently possible to improve the STM monad by turning it into an arrow. (I refer to STM in a very liberal sense here: whether memory is transactional or not is unimportant, the only thing that matters is a computation that composes atomically.) Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com
Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)
i'm not sure what your email is pointing at. if it is unclear, i understand the difference between applicative and monadic. i suppose the easy answer to why applicative can be faster than monadic is that you can give a more specialized instance declaration. i was just wondering if there was a way to make a monad recognize when it is being used applicatively, but that is probably hard in general. b On Apr 20, 2012, at 2:54 PM, KC wrote: Think of the differences (and similarities) of Applicative Functors and Monads and the extra context that monads carry around. -- -- Regards, KC ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] why are applicative functors (often) faster than monads? (WAS Google Summer of Code - Lock-free data structures)
the sequencing matters for applicative functors. from McBride and Patterson [1]: The idea is that 'pure' embeds pure computations into the pure fragment of an effectful world -- the resulting computations may thus be shunted around freely, as long as the order of the genuinely effectful computations is preserved. it is interesting to note that sequencing only matters a little for k-CAS : you just have to read before you write, but you can do the reads and writes in any order (as long as it is ultimately atomic.) b [1] McBride C, Patterson R. Applicative programming with effects Journal of Functional Programming 18:1 (2008), pages 1-13. On Apr 20, 2012, at 4:41 PM, KC wrote: Sorry, I thought you or someone was asking why are Applicative Functors faster in general than Monads. Functional programming is structured function calling to achieve a result where the functions can be evaluated in an unspecified order; I thought Applicative Functors had the same unspecified evaluation order; whereas, Monads could carry some sequencing of computations which has the extra overhead of continuation passing. Do I have that correct? On Fri, Apr 20, 2012 at 4:05 PM, Ben midfi...@gmail.com wrote: i'm not sure what your email is pointing at. if it is unclear, i understand the difference between applicative and monadic. i suppose the easy answer to why applicative can be faster than monadic is that you can give a more specialized instance declaration. i was just wondering if there was a way to make a monad recognize when it is being used applicatively, but that is probably hard in general. b On Apr 20, 2012, at 2:54 PM, KC wrote: Think of the differences (and similarities) of Applicative Functors and Monads and the extra context that monads carry around. -- -- Regards, KC -- -- Regards, KC ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.1.0
Paolo Capriotti wrote: On Mon, Apr 16, 2012 at 10:13 PM, Ben Franksen ben.frank...@online.de wrote: (1) What is the reason for the asymmetry in type Producer b m = Pipe () b m type Consumer a m = Pipe a Void m i.e. why does Producer use () for the input? I would expect it to use Void, like Consumer does for its output. Calling await in a Producer resulting in an immediate 'return ()' as you say is allowed (in the tutorial) strikes me as not very useful. The underlying reason for the asymmetry is the fact that '()' is a terminal object in the category of haskell types and *total* functions, while 'Void' is an initial object. Here's a property that uniquely determines the definitions of 'Producer' above. Let 'X' be the type such that 'Producer b m = Pipe X b m'. For all producers 'p' there should be a unique (total) pipe 'alpha :: forall a r. Pipe a X m r' such that 'alpha + p' and 'p' are observationally equal. In other words, since a producer never uses values of its input type 'a', there should be a unique way to make it into a pipe which is polymorphic in 'a'. It's easy to see that this property immediately implies that 'X' should be a terminal object, i.e. '()', and 'alpha' is therefore 'pipe (const ())'. Dually, you obtain that 'Consumer a m' is necessarily 'Pipe a Void m', and 'alpha = pipe absurd'. Ok, thanks for the explanation. Makes sense... (2) The $$ operator is poorly named. I would intuitively expect an operator that looks so similar to the standard $ to have the same direction of data flow (i.e. right to left, like function application and composition) but your is left to right. You could use e.g. $ instead, which has the additional advantage of allowing a symmetric variant for the other direction i.e. $. '$$' is inspired by iteratees. Similarly to its iteratee counterpart, it discards upstream result values and only returns the output of the last pipe. That said, '$' looks like a clearer alternative, so I could consider changing it. (...or maybe use a plain function instead of an operator...) Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] ANNOUNCE: pipes-core 0.1.0
Paolo Capriotti wrote: I'm pleased to announce the release of version 0.1.0 of pipes-core, a library for efficient, safe and compositional IO, similar in scope to iteratee and conduits. http://hackage.haskell.org/package/pipes-core I like your pipes package. This is very similar to what Mario Blažević wrote about his Coroutines in the Monad.Reader (can't remember which issue; great article, BTW, many thanks Mario for making me understand the iteratee business (and also generators) for the first time). Your pipes-core looks even simpler to use, maybe due to avoiding to make a type distinction between consumer/producer/pipe (except the natural one i.e. through the input/output types), even though the parameterization by a functor (as in Monad.Coroutine) has its own beauty. Two issues: (1) What is the reason for the asymmetry in type Producer b m = Pipe () b m type Consumer a m = Pipe a Void m i.e. why does Producer use () for the input? I would expect it to use Void, like Consumer does for its output. Calling await in a Producer resulting in an immediate 'return ()' as you say is allowed (in the tutorial) strikes me as not very useful. If the idea is simply to flag nonsense like consumer + producer with a type error, then it might be a better idea to introduce two different Void types: data NoOutput data NoInput type Producer b m = Pipe NoInput b m type Consumer a m = Pipe a NoOutput m type Pipeline m = Pipe NoInput NoOutput m (and isn't this nicely self-explaining?) (2) The $$ operator is poorly named. I would intuitively expect an operator that looks so similar to the standard $ to have the same direction of data flow (i.e. right to left, like function application and composition) but your is left to right. You could use e.g. $ instead, which has the additional advantage of allowing a symmetric variant for the other direction i.e. $. Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Haskell] ANNOUNCE: notcpp-0.0.1
On Sun, Apr 15, 2012 at 7:14 PM, Steffen Schuldenzucker sschuldenzuc...@uni-bonn.de wrote: On 04/13/2012 10:49 PM, Ben Millwood wrote: I'm pleased to announce my first genuinely original Hackage package: notcpp-0.0.1! http://hackage.haskell.org/package/notcpp [...] Why is it scopeLookup :: String -> Q Exp with n bound to x :: T => @scopeLookup n@ evaluates to an Exp containing @Just x@ , not scopeLookup :: String -> Q (Maybe Exp) with n bound to x :: T => @scopeLookup n@ evaluates to Just (an Exp containing @x@) ? Shouldn't n's being in scope be a compile time decision? That would also make the openState: runtime name resolution has its drawbacks :/[1] a compile time error. -- Steffen [1] http://hackage.haskell.org/packages/archive/notcpp/0.0.1/doc/html/NotCPP-ScopeLookup.html This way minimises the amount the user has to know about Template Haskell, because the user can just splice the expression immediately and then operate on it as an ordinary value. The design you suggest would require messing about in the Q monad to construct the expression you wanted based on whether you got a Nothing or a Just, which in my view is more awkward. I can see how your version would be useful too, though – in particular I can move the error call to a report call, which throws a compile-time error as you say. I'd be happy to expose both in the next version. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GHCi runtime linker: fatal error (was Installing REPA)
On 08/04/2012, at 2:41 AM, Dominic Steinitz wrote: Hi Ben, Chris and Others, Thanks for your replies and suggestions. All I want to do is invert (well solve actually) a tridiagonal matrix so upgrading ghc from the version that comes with the platform seems a bit overkill. I think I will go with Chris' suggestion for now and maybe upgrade ghc (and REPA) when I am feeling braver. Dominic. Sadly I now get this when trying to multiply two matrices. Is this because I have two copies of Primitive? I thought Cabal was supposed to protect me from this sort of occurrence. Does anyone have any suggestions on how to solve this? You'll need to upgrade. Trying to support old versions of software is a lost cause. I pushed Repa 3.1 to Hackage on the weekend. It has a *much* cleaner API. I can't recommend continuing to use Repa 2. You will just run into all the problems that are now fixed in Repa 3. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Installing REPA
On 07/04/2012, at 9:33 AM, Chris Wong wrote: On Sat, Apr 7, 2012 at 2:02 AM, Dominic Steinitz idontgetoutm...@googlemail.com wrote: Hi, I'm trying to install REPA but getting the following. Do I just install base? Or is it more complicated than that? Thanks, Dominic. I think the easiest solution is to just use an older version of Repa. According to Hackage, the latest one that works with base 4.3 is Repa 2.1.1.3: $ cabal install repa==2.1.1.3 I've just pushed Repa 3 onto Hackage, which has a much better API than the older versions, and solves several code fusion problems. However, you'll need to upgrade to GHC 7.4 to use it. GHC 7.0.3 is two major releases behind the current version. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Installing REPA
On 07/04/2012, at 21:38 , Peter Simons wrote: Hi Ben, I've just pushed Repa 3 onto Hackage, which has a much better API than the older versions, and solves several code fusion problems. when using the latest version of REPA with GHC 7.4.1, I have trouble building the repa-examples package: | Building repa-examples-3.0.0.1... | Preprocessing executable 'repa-volume' for repa-examples-3.0.0.1... When I attempt to use repa 3.1.x, the build won't even get past the configure stage, because Cabal refuses these dependencies. Is that a known problem, or am I doing something wrong? It is a conjunction of tedious Cabal and Hackage limitations, as well as my failure to actually upload the new repa-examples package. Please try again now, and if that doesn't work email be the output of: $ cabal update $ cabal install repa-examples $ ghc-pkg list Thanks, Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures
perhaps it is too late to suggest things for GSOC -- but stephen tetley on a different thread pointed at aaron turon's work, which there's a very interesting new concurrency framework he calls reagents which seems to give the best of all worlds : it is declarative and compositional like STM, but gives performance akin to hand-coded lock-free data structures. he seems to have straddled the duality of isolation vs message-passing nicely, and can subsume things like actors and the join calculus. http://www.ccs.neu.edu/home/turon/reagents.pdf he has a BSD licensed library in scala at https://github.com/aturon/ChemistrySet if someone doesn't want to pick this up for GSOC i might have a hand at implementing it myself. b On Mar 29, 2012, at 6:46 AM, Tim Harris (RESEARCH) wrote: Hi, Somewhat related to this... Next month we have a paper coming up at EuroSys about a middle-ground between using STM and programming directly with CAS: http://research.microsoft.com/en-us/um/people/tharris/papers/2012-eurosys.pdf This was done in the context of shared memory data structures in C/C++, rather than Haskell. It might be interesting to see how the results carry over to Haskell, e.g. adding short forms of specialized transactions that interact correctly with normal STM-Haskell transactions. In the paper we have some examples of using short specialized transactions for the fast paths in data structures, while keeping the full STM available as a fall-back for expressing the cases that cannot be implemented using short transactions. --Tim -Original Message- From: haskell-cafe-boun...@haskell.org [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Heinrich Apfelmus Sent: 29 March 2012 13:30 To: haskell-cafe@haskell.org Subject: Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures Florian Hartwig wrote: Heinrich Apfelmus wrote: So while the two are related, CAS is a machine primitive that works for a single operation and on a single word while STM is a software abstraction that isolates sequences of operations on multiple memory locations from each other. Is it possible to implement every data structure based on CAS in terms of STM? What are the drawbacks? The other way round? I think so. Atomically reading and writing a single memory location (which CAS does) is just a very simple transaction. But using a CAS instruction should be more efficient, since STM has to maintain a transaction log and commit transactions, which creates some overhead. Ah, I see. In that case, it may be worthwhile to implement the CAS instruction in terms of STM as well and measure the performance difference this makes for the final data structure. After all, STM is a lot more compositional than CAS, so I'd like to know whether the loss of expressiveness is worth the gain in performance. Best regards, Heinrich Apfelmus -- http://apfelmus.nfshost.com ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures
Ben midfi...@gmail.com writes: perhaps it is too late to suggest things for GSOC -- but stephen tetley on a different thread pointed at aaron turon's work, which there's a very interesting new concurrency framework he calls reagents which seems to give the best of all worlds : it is declarative and compositional like STM, but gives performance akin to hand-coded lock-free data structures. he seems to have straddled the duality of isolation vs message-passing nicely, and can subsume things like actors and the join calculus. http://www.ccs.neu.edu/home/turon/reagents.pdf he has a BSD licensed library in scala at https://github.com/aturon/ChemistrySet if someone doesn't want to pick this up for GSOC i might have a hand at implementing it myself. Keep us in the loop if you do. I have a very nice application that has been needing a nicer approach to concurrency than IORefs but really can't afford STM. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Mathematics and Statistics libraries
I am a student currently interested in participating in Google Summer of Code. I have a strong interest in Haskell, and a semester's worth of coding experience in the language. I am a mathematics and cs double major with only a semester left and I am looking for information regarding what the community is lacking as far as mathematics and statistics libraries are concerned. If there is enough interest I would like to put together a project with this. I understand that such libraries are probably low priority, but if anyone has anything I would love to hear it. Thanks for reading, -Benjamin ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] using mutable data structures in pure functions
I'm sure others will want to chime in here, but I'll offer my two cents. On Sun, 11 Mar 2012 22:38:56 -0500, E R pc88m...@gmail.com wrote: snip So, again, what is the Haskell philosophy towards using mutable data structures in pure functions? Is it: 1. leave it to the compiler to find these kinds of opportunities 2. just use the immutable data structures - after all they are just as good (at least asymptotically) 3. you don't want to use mutable data structures because of _ 4. it does happen, and some examples are __ You will find that a surprising amount of the time option 2, just using the immutable data structures, will be sufficient. After all, programmer time is frequently more expensive than processor time. That being said, there are some cases where you really do want to be able to utilize a mutable data structure inside of an otherwise pure algorithm. This is precisely the use case for the ST monad. ST serves to allow the use of mutable state inside of a function while hiding the fact from the outside world. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
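As a small illustration of that last point (not taken from the thread), a pure function can use a mutable STRef internally while exposing a perfectly pure interface:

    import Control.Monad.ST
    import Data.STRef

    -- Sums a list using a mutable accumulator. runST guarantees the
    -- STRef cannot escape, so callers see an ordinary pure function.
    sumST :: Num a => [a] -> a
    sumST xs = runST $ do
      acc <- newSTRef 0
      mapM_ (\x -> modifySTRef acc (+ x)) xs
      readSTRef acc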
Re: [Haskell-cafe] Hackage 2 maintainership
On Tue, 14 Feb 2012 02:59:27 -0800 (PST), Kirill Zaborsky qri...@gmail.com wrote: I apologize, but does hackage.haskell.org being down for some hours already have something to do with the process of bringing up Hackage 2? Nope, it will be some time before we are in a position to touch hackage.haskell.org. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Hackage 2 maintainership
On Tue, 14 Feb 2012 02:06:16 +, Duncan Coutts duncan.cou...@googlemail.com wrote: On 14 February 2012 01:53, Duncan Coutts duncan.cou...@googlemail.com wrote: Hi Ben, snip Ah, here's the link to my last go at getting people to self-organise. http://www.haskell.org/pipermail/cabal-devel/2011-October/007803.html Excellent. I'll give it a read-through. You should find it somewhat useful. It gives an overview of people who are / have been involved. It seems the first task will be to identify exactly what needs to be done before we can begin the transition and record these tasks in a single place. I don't have a particularly strong opinion concerning where this should be (Trac, the Hackage wiki, the github issue tracker that's been mentioned), but we should consolidate everything we have in a single place. We did get another reasonable push at the time. In particular Max did a lot of good work. I'm not quite sure why it petered out again, I'd have to ask Max what went wrong, if it was my fault for letting things block on me or if it was just holidays/christmas. Maintaining momentum is hard. This is quite true. I'll try to keep a constant push. On another note, how did your full mirroring go last night? Cheers, - Ben P.S. Duncan, sorry for the duplicate message. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Hackage 2 maintainership
Hey all, Those of you who follow the Haskell subreddit no doubt saw today's post regarding the status of Hackage 2. As has been said many times in the past, the primary blocker at this point to the adoption of Hackage 2 appears to be the lack of an administrator. It seems to me this is a poor reason for this effort to be held up. Having taken a bit of time to consider, I would be willing to put in some effort to get things moving and would be willing to maintain the haskell.org Hackage 2.0 instance going forward if necessary. I currently have a running installation on my personal machine and things seem to be working as they should. On the whole, installation was quite trivial, so it seems likely that the project is indeed at a point where it can handle real use (although a logout option in the web interface would make testing a bit easier). That being said, it would in my opinion be silly to proceed without fixing the Hackage trac. It was taken down earlier this year due to spamming[1] and it seems the recovery project has been orphaned. I would be willing to help with this effort, but it seems like someone more familiar with the haskell.org infrastructure might be better equipped to handle the situation. It seems that this process will go something like this, 1) Bring Hackage trac back from the dead 2) Bring up a Hackage 2 instance alongside the existing hackage.haskell.org 3) Enlist testers 4) Let things simmer for a few weeks/months ensuring nothing explodes 5) After it's agreed that things are stable, eventually swap the Hackage 1 and 2 instances This will surely be a non-trivial process but I would be willing to move things forward. Cheers, - Ben [1] http://www.haskell.org/pipermail/cabal-devel/2012-January/008427.html ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] Transactional memory going mainstream with Intel Haswell
http://arstechnica.com/business/news/2012/02/transactional-memory-going-mainstream-with-intel-haswell.ars would any haskell STM expert care to comment on the possibilities of hardware acceleration? best, ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Error in installing dph-examples on Mac OS X 10.7.3
On 10/02/2012, at 6:12 AM, mukesh tiwari wrote: Hello all I am trying to install dph-examples on Mac OS X version 10.7.3 but getting this error. I am using ghc-7.4.1. This probably isn't DPH specific. Can you compile a hello world program with -fllvm? Ben.___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Loading a texture in OpenGL
On 07/02/2012, at 7:00 AM, Clark Gaebel wrote: Using the OpenGL package on Hackage, how do I load a texture from an array? In the red book[1], I see their code using glGenTextures and glBindTexture, but I can't find these in the documentation. Are there different functions I should be calling? The Gloss graphics library has texture support, and the code for drawing them is confined to this module: http://code.ouroborus.net/gloss/gloss-head/gloss/Graphics/Gloss/Internals/Render/Picture.hs Feel free to steal the code from there. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Loading a texture in OpenGL
On 07/02/2012, at 2:40 PM, Clark Gaebel wrote: Awesome. Thanks! As a follow up question, how do I add a finalizer to a normal variable? OpenGL returns an integer handle to your texture in graphics memory, and you have to call deleteObjectNames on it. Is there any way to have this automatically run once we lose all references to this variable (and all copies)? I don't know. I've only used ForeignPtrs with finalisers before [1]. One problem with these finalisers is that GHC provides no guarantees on when they will be run. It might be just before the program exits, instead of when the pointer actually becomes unreachable. Because texture memory is a scarce resource, I wouldn't want to rely on a finaliser to free it -- though I suppose this depends on what you're doing. Ben. [1] http://www.haskell.org/ghc/docs/latest/html/libraries/haskell2010-1.1.0.1/Foreign-ForeignPtr.html ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Loading a texture in OpenGL
On 07/02/2012, at 2:50 PM, Clark Gaebel wrote: I would be running the GC manually at key points to make sure it gets cleaned up. Mainly, before any scene changes when basically everything gets thrown out anyways. From the docs: newForeignPtr :: FinalizerPtr a -> Ptr a -> IO (ForeignPtr a) Turns a plain memory reference into a foreign pointer, and associates a finalizer with the reference. The finalizer will be executed after the last reference to the foreign object is dropped. There is no guarantee of promptness, however the finalizer will be executed before the program exits. No guarantee of promptness. Even if the GC knows your pointer is unreachable, it might choose not to call the finaliser. I think people have been bitten by this before. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
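For reference, attaching an arbitrary IO finaliser can be done with Foreign.Concurrent rather than a FinalizerPtr. The sketch below is illustrative only; wrapTexture, freeTexture and sceneChange are invented names, with freeTexture standing in for whatever deleteObjectNames call the real code would make.

    import Foreign.Ptr (Ptr)
    import Foreign.ForeignPtr (ForeignPtr)
    import qualified Foreign.Concurrent as FC
    import System.Mem (performGC)

    -- Wrap a raw handle so that freeTexture runs (eventually) once the
    -- ForeignPtr becomes unreachable. Promptness is still not guaranteed.
    wrapTexture :: Ptr () -> (Ptr () -> IO ()) -> IO (ForeignPtr ())
    wrapTexture p freeTexture = FC.newForeignPtr p (freeTexture p)

    -- Calling performGC at scene changes, as suggested above, only
    -- encourages finalisers to run; it does not force them.
    sceneChange :: IO ()
    sceneChange = performGC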
Re: [Haskell-cafe] Compiling dph package with ghc-7.4.0.20111219
On 21/01/2012, at 22:47 , mukesh tiwari wrote: Hello all I have installed ghc-7.4.0.20111219 and this announcement says that The release candidate accidentally includes the random, primitive, vector and dph libraries. The final release will not include them. I tried to compile a program [ntro@localhost src]$ ghc-7.4.0.20111219 -c -Odph -fdph-par ParallelMat.hs ghc: unrecognised flags: -fdph-par Usage: For basic information, try the `--help' option. [ntro@localhost src]$ ghc-7.2.1 -c -Odph -fdph-par ParallelMat.hs The -fdph-par flag doesn't exist anymore, but we haven't had a chance to update the wiki yet. Use -package dph-lifted-vseg to select the backend. You could also look at the cabal file for the dph-examples package to see what flags we use when compiling. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] SMP parallelism increasing GC time dramatically
On Mon, 9 Jan 2012 18:22:57 +0100, Mikolaj Konarski mikolaj.konar...@gmail.com wrote: Tom, thank you very much for the ThreadScope feedback. Anything new? Anybody? We are close to a new release, so that's the last call for bug reports before the release. Stay tuned. :) As it turns out, I ran into a similar issue with a concurrent Gibbs sampling implementation I've been working on. Increasing -H fixed the regression, as expected. I'd be happy to provide data if someone was interested. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?
On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote: * Alexander Solla alex.so...@gmail.com [2011-12-19 19:10:32-0800] * Documentation that discourages thinking about bottom as a 'value'. It's not a value, and that is what defines it. In denotational semantics, every well-formed term in the language must have a value. So, what is a value of fix id? There isn't one! Bottoms will be the null pointers of the 2010's, you watch. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
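For readers following along, the definition being alluded to is essentially fix from Data.Function; a small illustration only:

    -- The usual fixed-point combinator; 'fix id' typechecks at any type
    -- but never yields a value, which is exactly what _|_ (bottom) is.
    fix :: (a -> a) -> a
    fix f = let x = f x in x

    bottom :: a
    bottom = fix id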
Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?
On 20/12/2011, at 9:06 PM, Thiago Negri wrote: There isn't one! Bottoms will be the null pointers of the 2010's, you watch. How would you represent it then? Types probably. In C, the badness of null pointers is that when you inspect an int* you don't always find an int. Of course the superior Haskell solution is to use algebraic data types, and represent a possibly exceptional integer by Maybe Int. But then when you inspect a Maybe Int you don't always get an .. ah. Would it cause a compiler error? Depends whether you really wanted an Int or not. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
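A tiny sketch of the point being trailed off here (names are illustrative): wrapping a value in Maybe does not keep bottom out of the payload.

    -- Pattern matching on Just succeeds, but using the payload still
    -- hits the undefined, so Maybe Int alone does not banish partiality.
    trouble :: Maybe Int
    trouble = Just undefined

    inspect :: Maybe Int -> String
    inspect Nothing  = "nothing"
    inspect (Just n) = "got " ++ show n   -- throws for 'trouble'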
Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?
In denotational semantics, every well-formed term in the language must have a value. So, what is a value of fix id? There isn't one! Bottoms will be the null pointers of the 2010's, you watch. This ×1000. Errors go in an error monad. Including all possible manifestations of infinite loops? Some would say that non-termination is a computational effect, and I can argue either way depending on the day of the week. Of course, the history books show that monads were invented *after* it was decided that Haskell would be a lazy language. Talk about selection bias. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?
On 20/12/2011, at 21:52, Gregory Crosswhite wrote: Some would say that non-termination is a computational effect, and I can argue either way depending on the day of the week. *shrug* I figure that whether you call _|_ a value is like whether you accept the Axiom of Choice: it is a situational decision that depends on what you are trying to learn more about. I agree, but I'd like to have more control over my situation. Right now we have boxed and lifted Int, and unboxed and unlifted Int#, but not the boxed and unlifted version, which IMO is usually what you want. Of course, the history books show that monads were invented *after* it was decided that Haskell would be a lazy language. Talk about selection bias. True, but I am not quite sure how that is relevant to _|_... I meant to address the implicit question of why Haskell doesn't already use monads to describe non-termination. The answer isn't necessarily because it's not a good idea, it's because that wasn't an option at the time. On Dec 20, 2011, at 14:40, Jesse Schalken jesseschal...@gmail.com wrote: Including all possible manifestations of infinite loops? So... this imaginary language of yours would be able to solve the halting problem? All type systems are incomplete. The idea is to do a termination analysis, and if the program cannot be proved to terminate, then it is marked as possibly non-terminating. This isn't the same as deciding something is *definitely* non-terminating, which is what the halting problem is about. This possibly non-terminating approach is already used by Coq, Agda and other languages. Ben. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] indentation blues
I am fairly new to haskell, but I really like the emacs haskell mode. It is a bit strict but it generally does what I want it to. Unfortunately I can't really compare to the haskell vim mode since I only did Scala and Perl back when I was a heavy vim user. The one useful thing that I can add is that there are some really good packages out there for modal vi keybindings in emacs. If you truly like the vim keybindings better then you can still use them in emacs. I hope that helps a little bit. On Tue, Dec 13, 2011 at 10:50 AM, Martin DeMello martindeme...@gmail.com wrote: The vim autoindent for haskell is really bad :( Is there a better indent.hs file floating around somewhere? Alternatively, is the emacs haskell mode better enough that it's worth my time learning my way around emacs and evil? martin ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] lambda.fm How can I use this to help the Haskell community?
A while back I somehow managed to get the domain name, lambda.fm and I am simply creating this post to get some ideas from the community on what it could be used for to help the FP community. So tell me what you think. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Deduce problem.
Magicloud Magiclouds wrote: So I think I got what you guys meant, I limited ClassB to only H. Then how do I achieve my requirement, that from and to only return items that are instances of ClassB? If you are willing to go beyond Haskell98 (or Haskell2010), you can use a multi-parameter class. Enable the extension: {-# LANGUAGE MultiParamTypeClasses #-} And then, instead of class (ClassA a) => ClassC a where from :: (ClassB b) => a -> [b] to :: (ClassB c) => a -> [c] you say class (ClassA a, ClassB b) => ClassC a b c where from :: c -> [b] to :: c -> [a] This means that for each triple of concrete types (a,b,c) that you wish to be an instance of ClassC, you must provide an instance declaration, e.g. instance ClassC Test H H where from = ...whatever... to = ...whatever... Now you have the fixed type H in the instance declaration and not a universally quantified type variable. Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
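A self-contained sketch of the idea, simplified here to two class parameters; ClassA, ClassB, Test and H are stand-ins for the poster's types and only the shape of the declarations matters:

    {-# LANGUAGE MultiParamTypeClasses #-}

    class ClassA a where labelA :: a -> String
    class ClassB b where labelB :: b -> String

    -- Both the container type and the element type are class parameters,
    -- so an instance can pin the element type to one concrete ClassB type.
    class (ClassA a, ClassB b) => ClassC a b where
      from :: a -> [b]
      to   :: a -> [b]

    data Test = Test
    data H    = H

    instance ClassA Test where labelA _ = "Test"
    instance ClassB H    where labelB _ = "H"

    -- The element type is fixed to H here, rather than being universally
    -- quantified as in the original single-parameter class.
    instance ClassC Test H where
      from _ = [H]
      to   _ = [H]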
Re: [Haskell-cafe] Interpreter with Cont
You'll probably get answers from people who are more proficient with this, but here's what I learned over the years. Tim Baumgartner wrote: Is Cont free as well? No. In fact, free monads are quite a special case, many monads are not free, e.g. the list monad. I believe what David Menendez said was meant to mean 'modulo some equivalence relation' i.e. you can define/implement many monads as 'quotients' of a free monad. But you cannot do this with Cont (though I am not able to give a proof). I guess so because I heard it's sometimes called the mother of all monads. It is, in the sense that you can implement all monads in terms of Cont. Cheers Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
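For readers who have not met them, a free monad over a functor f looks like the following; this is essentially what the free package provides, modulo constructor names, with the instances spelled out:

    -- Free f is built from pure values and layers of the functor f.
    data Free f a = Pure a | Roll (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a) = Pure (g a)
      fmap g (Roll m) = Roll (fmap (fmap g) m)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g <*> x = fmap g x
      Roll m <*> x = Roll (fmap (<*> x) m)

    instance Functor f => Monad (Free f) where
      return = pure
      Pure a >>= k = k a
      Roll m >>= k = Roll (fmap (>>= k) m)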
Re: [Haskell-cafe] A Mascot
heathmatlock wrote: Cute! I like it! Yea, it's cute. I don't like the formula, though: \x -> x + x is just too trivial and not very Haskellish. Something higher order is the minimum requirement, IMO. The original (lambda knights) formula was cool: the fixed point operator is directly related to recursion, which is reflected in the picture that contains itself; note also that defining this operator requires an untyped language, so this fits LISP quite well (but not Haskell). What about the formula for function composition (f . g) x = f (g x) maybe together with its type (or maybe only the type) (.) :: (b -> c) -> (a -> b) -> a -> c Extremely cool are GADTs, such as data Eq a b where Refl :: Eq a a Or, if you'd like something more obscure but still at the center of what Haskell is about, take the mother of all monads m >>= f = \k -> m (\a -> (f a) k) This is a formula I can spend a day contemplating and still wonder if I have _really_ understood it. And doesn't that properly reflect the depth and richness of Haskell? Cheers Ben On Mon, Nov 21, 2011 at 7:52 AM, Karol Samborski edv.ka...@gmail.com wrote: 2011/11/21 Karol Samborski edv.ka...@gmail.com: Hi all, This is my sister's proposition: http://origami.bieszczady.pl/images/The_Lamb_Da.png What do you think? Second version: http://origami.bieszczady.pl/images/The_Lamb_Da2.png Best, Karol Samborski ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
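To see the "mother of all monads" formula in context, here is a bare continuation-passing type with that bind; a sketch only, though the Cont type in transformers is the same idea modulo newtype wrapping:

    newtype C r a = C { runC :: (a -> r) -> r }

    instance Functor (C r) where
      fmap f (C m) = C $ \k -> m (k . f)

    instance Applicative (C r) where
      pure a = C ($ a)
      C mf <*> C ma = C $ \k -> mf (\f -> ma (k . f))

    instance Monad (C r) where
      return = pure
      -- the formula quoted above, with the newtype made explicit
      C m >>= f = C $ \k -> m (\a -> runC (f a) k)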
[Haskell-cafe] os.path.expanduser analogue
On the whole, the filepath package does an excellent job of providing basic path manipulation tools; one weakness is the inability to resolve ~/... style POSIX paths. Python implements this with os.path.expanduser. Perhaps a similar function might be helpful in filepath? Cheers, - Ben Possible (but untested) implementation expandUser :: FilePath -> IO FilePath expandUser p = if "~/" `isPrefixOf` p then do u <- getLoginName return $ u ++ drop 2 p else return p ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
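A corrected sketch (not the improved implementation referred to in the follow-up below) would expand to the home directory rather than prepend the login name, for example using getHomeDirectory from the directory package:

    import Data.List (isPrefixOf)
    import System.Directory (getHomeDirectory)
    import System.FilePath ((</>))

    -- Expand a leading "~/" to the current user's home directory.
    -- "~user/..." forms are left untouched in this sketch.
    expandUser :: FilePath -> IO FilePath
    expandUser p
      | "~/" `isPrefixOf` p = do
          home <- getHomeDirectory
          return (home </> drop 2 p)
      | otherwise = return p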
Re: [Haskell-cafe] os.path.expanduser analogue
On Sun, 20 Nov 2011 21:02:30 -0500, Brandon Allbery allber...@gmail.com wrote: On Sun, Nov 20, 2011 at 20:36, Ben Gamari bgamari.f...@gmail.com wrote: [Snip] Although arguably there should be some error checking. Thanks for the improved implementation. I should have re-read my code before sending as it wasn't even close to correct. Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] timezone-series, timezone-olson dependencies
Is there a reason why the current version of the timezone-series and timezone-olson packages depend on time < 1.3? With time 1.4 being widely used at this point this will cause conflicts with many packages, yet my tests show that both packages work fine with time 1.4. Could we have this upper bound bumped to 1.5? Cheers, - Ben ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] zlib build failure on recent GHC
With GHC 1ece7b27a11c6947f0ae3a11703e22b7065a6b6c zlib fails to build, apparently due to Safe Haskell (bug 5610 [1]). The error is specifically, $ cabal install zlib Resolving dependencies... Configuring zlib-0.5.3.1... Preprocessing library zlib-0.5.3.1... Building zlib-0.5.3.1... [1 of 5] Compiling Codec.Compression.Zlib.Stream ( dist/build/Codec/Compression/Zlib/Stream.hs, dist/build/Codec/Compression/Zlib/Stream.o ) Codec/Compression/Zlib/Stream.hsc:857:1: Unacceptable argument type in foreign declaration: CInt When checking declaration: foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_ :: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt Codec/Compression/Zlib/Stream.hsc:857:1: Unacceptable argument type in foreign declaration: CInt When checking declaration: foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_ :: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt Codec/Compression/Zlib/Stream.hsc:857:1: Unacceptable result type in foreign declaration: IO CInt Safe Haskell is on, all FFI imports must be in the IO monad When checking declaration: foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_ :: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt ... This is a little strange since, a) It's not clear why Safe Haskell is enabled b) The declarations in question seem to be valid Does this seem like a compiler issue to you? Cheers, - Ben [1] http://hackage.haskell.org/trac/ghc/ticket/5610 ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe