Re: [Haskell-cafe] Compiling arbitrary Haskell code
Whatever guarantees GHC offers (e.g. using Safe Haskell), I would always run things like these in a sandbox. It's much better for security to disallow everything and then whitelist some things (e.g. let the sandbox communicate with the rest of the world in some limited way) than the other way around. The same goes for running untrusted code.

On Fri, Oct 11, 2013 at 1:30 PM, Christopher Done chrisd...@gmail.com wrote:

Is there a definitive list of things in GHC that are unsafe to _compile_ if I were to take an arbitrary module and compile it? E.g. off the top of my head, things that might be dangerous:

* TemplateHaskell/QuasiQuotes -- obviously
* Are rules safe?
* #includes -- I presume there's some security risk with including any old file?
* FFI -- speaks for itself

I'm interested in the idea of compiling Haskell code on lpaste.org, for core, rule firings, maybe even TH expansion, etc. When sandboxing code that I'm running, it's really easy if I whitelist what code is available (parsing with HSE, whitelisting imports, extensions). The problem of infinite loops or too much allocation is fairly straightforwardly solved by techniques similar to those applied in mueval.

SafeHaskell helps a lot here, but suppose that I want to also allow TemplateHaskell, GeneralizedNewtypeDeriving and the like, because a lot of real code uses them. They only seem to be restricted to prevent cheeky messing with APIs in ways the authors of the APIs didn't want -- but that shouldn't necessarily be a security problem (in terms of my system), should it? Ideally I'd very strictly whitelist which modules are allowed to be used (e.g. a version of TH that doesn't have runIO), and extensions, and then compile any code that uses them.

I'd rather not have to set up a VM just to compile Haskell code safely. I'm willing to put some time in to investigate it, but if there's already previous work done on this, I'd appreciate any links.
At the end of the day, there's always just supporting a subset of Haskell using SafeHaskell. I'm just curious about the more general case, for use-cases similar to my own.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
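The whitelisting approach described in the thread (check requested extensions against an allow-list before compiling anything) can be sketched as a pure filter. The `allowedExtensions` list and function names below are hypothetical, not part of any existing tool:

```haskell
import Data.List (partition)

-- Hypothetical whitelist; a real paste service would tune this carefully
-- and pair it with an import whitelist checked via HSE.
allowedExtensions :: [String]
allowedExtensions = ["OverloadedStrings", "BangPatterns", "ScopedTypeVariables"]

-- Split the requested extensions into allowed and rejected ones.
-- Compilation would only proceed when the rejected list is empty.
vetExtensions :: [String] -> ([String], [String])
vetExtensions = partition (`elem` allowedExtensions)

main :: IO ()
main = print (vetExtensions ["BangPatterns", "TemplateHaskell"])
-- prints (["BangPatterns"],["TemplateHaskell"])
```

The same shape works for module names: anything not explicitly on the list is rejected, which matches the deny-by-default stance advocated above.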
Re: [Haskell-cafe] Telling Cassava to ignore lines
Hi,

It depends on what you mean by doesn't parse. From your message I assume the CSV is valid, but some of the actual values fail to convert (using FromField). There are a couple of things you could try:

1. Define a small wrapper type for your field that calls runParser using e.g. the Int parser and, if it fails, returns some other value. I should probably add an Either instance that covers this case, but there's none there now.

    data MaybeInt = JustI !Int | ParseFailed

    instance FromField MaybeInt where
        parseField s = case runParser (parseField s) of
            Left err         -> pure ParseFailed
            Right (n :: Int) -> pure (JustI n)

(This is from memory, so I might have gotten some of the details wrong.)

2. Use the Streaming module, which lets you skip whole records that fail to parse (see the docs for the Cons constructor).

-- Johan

On Tue, Sep 17, 2013 at 6:43 PM, Andrew Cowie and...@operationaldynamics.com wrote:

I'm happily using Cassava to parse CSV, only to discover that non-conforming lines in the input data are causing the parser to error out.

    let e = decodeByName y' :: Either String (Header, Vector Person)

chugs along fine until line 461 of the input, when: parse error (endOfInput) at ...

Ironically, when my Person (ha) data type was all fields of :: Text it just worked, but now that I've specified one or two of the fields as Int or Float or whatever, it's mis-parsing. Is there a way to tell it to just ignore lines that don't parse, rather than killing the whole run? Cassava understands skipping the *header* line (and indeed using it to do the -by-name field mapping). Otherwise the only thing I can see is going back to all the fields being :: Text, and then running over that as an intermediate structure and validating whether or not things parse to e.g. Float.

AfC Sydney

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
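The skip-bad-records policy in option 2 can be illustrated without cassava at all. This stand-in parses one Int field per record with the standard library and silently drops records that fail to convert, which is the same shape a loop over the Streaming module's Cons constructor would take (all names here are hypothetical):

```haskell
import Data.Maybe (mapMaybe)

-- Parse one CSV-ish field as an Int, returning Nothing on failure.
-- This stands in for cassava's per-record Either results.
parseIntField :: String -> Maybe Int
parseIntField s = case reads s of
  [(n, "")] -> Just n
  _         -> Nothing

-- Keep only the records whose field converts, dropping the rest --
-- the "ignore lines that don't parse" behaviour the poster wants.
skipBad :: [String] -> [Int]
skipBad = mapMaybe parseIntField

main :: IO ()
main = print (skipBad ["1", "oops", "3"])
-- prints [1,3]
```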
Re: [Haskell-cafe] Cabal --enable-tests
I don't think so. Perhaps we should set one. What's your use case? Perhaps you could describe it in a new bug report at https://github.com/haskell/cabal/issues

On Mon, Sep 9, 2013 at 7:29 PM, satvik chauhan mystic.sat...@gmail.com wrote:

Hi cafe, I wanted to ask this as I couldn't find it in the cabal documentation. Is there any CPP macro set when a package is configured with --enable-tests? If not, is there a way to do that?

-Satvik

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
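Until such a built-in macro exists, a package can roll its own: define a manual flag and derive a CPP define from it. This fragment is only a sketch of that convention; the `testing` flag and `TESTING` macro names are choices the package author makes, not anything Cabal provides:

```cabal
flag testing
  description: Expose internals for the test suite
  default:     False
  manual:      True

library
  if flag(testing)
    cpp-options: -DTESTING

-- then configure with: cabal configure --enable-tests -ftesting
-- and guard test-only exports in code with #ifdef TESTING
```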
Re: [Haskell-cafe] ANN: Cabal v1.18.0 released
I pasted your report into the bug tracker: https://github.com/haskell/cabal/issues/1478 I don't know if you're on GitHub or not, so I could link the report to your user.

On Thu, Sep 5, 2013 at 8:16 AM, Rogan Creswick cresw...@gmail.com wrote:

I ran into another oddity due to old build artifacts today -- it was easy to fix, but very confusing; cabal repl was exiting with "unrecognised command: repl".

tl;dr: if you see this, delete the old 'dist' dir and re-run 'cabal configure'.

Here's a snippet of my shell session to explain in more detail:

    $ cabal sandbox init
    ...
    $ cabal --version
    cabal-install version 1.18.0
    using version 1.18.0 of the Cabal library
    $ cabal configure
    Resolving dependencies...
    Configuring pgftransform-0.0.0.1...
    $ cabal repl
    unrecognised command: repl (try --help)
    $ cabal --help
    ...
    Commands:
      ...
      build    Compile all targets or specific targets.
      repl     Open an interpreter session for the given target.
      sandbox  Create/modify/delete a sandbox.
      ...

Note that cabal --version and cabal --help indicated that repl /was/ a valid command. The issue appears to be that an old dist directory was still hanging around, and (I suspect) the compiled Setup.hs build program (which would have been built with an old Cabal) was causing the actual error. Deleting the dist dir and re-running cabal configure set everything right.

--Rogan

On Thu, Sep 5, 2013 at 6:18 AM, Yuri de Wit yde...@gmail.com wrote:

It is easy enough to recreate the link manually, or as easy to run cabal install again, but that is not the point here. The point is that it will bite the next dozen unsuspecting users since, at first, they have no idea of what is going on. In any case, apologies for sending this in this thread as it doesn't seem the right forum to discuss it.

On Thu, Sep 5, 2013 at 7:53 AM, Paolo Giarrusso p.giarru...@gmail.com wrote:

On Wednesday, September 4, 2013 11:41:33 PM UTC+2, Yuri de Wit wrote: Thanks for all the hard work!
If you see this in OSX (#1009) while installing cabal 1.18:

    Warning: could not create a symlink in /Users/lemao/Library/Haskell/bin for
    cabal because the file exists there already but is not managed by cabal.
    You can create a symlink for this executable manually if you wish.
    The executable file has been installed at
    /Users/user/Library/Haskell/ghc-7.6.3/lib/cabal-install-1.18.0/bin/cabal

You will need to manually remove the pre-existing 1.16 links in ~/Library/Haskell/bin before installing again. Instead of installing again, you can even just recreate the symlink as mentioned by the message (if you know how to do that).

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
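The remove-and-relink step being described can be sketched as follows. The real paths on OS X would be ~/Library/Haskell/bin/cabal and the versioned install directory from the warning; here the whole thing is demonstrated in a throwaway temp directory so the paths are stand-ins:

```shell
#!/bin/sh
# Sketch: replace a stale 'cabal' symlink with one pointing at the
# newly installed binary. All paths below are illustrative.
dir=$(mktemp -d)
mkdir -p "$dir/old-install" "$dir/new-install" "$dir/bin"
touch "$dir/old-install/cabal" "$dir/new-install/cabal"

ln -s "$dir/old-install/cabal" "$dir/bin/cabal"  # the pre-existing 1.16 link

rm "$dir/bin/cabal"                              # remove the stale link
ln -s "$dir/new-install/cabal" "$dir/bin/cabal"  # recreate it by hand

readlink "$dir/bin/cabal"                        # now points at new-install
rm -rf "$dir"
```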
Re: [Haskell-cafe] ANN: Cabal v1.18.0 released
Hideyuki Tanaka was missing from the list of contributors (his patch was applied through me). His contribution made 'cabal update' faster!

On Wed, Sep 4, 2013 at 2:11 PM, Johan Tibell johan.tib...@gmail.com wrote:

Hi all,

On behalf of the cabal maintainers and contributors I'm proud to announce the Cabal (and cabal-install) 1.18.0 release. To install, run:

    cabal update
    cabal install Cabal-1.18.0 cabal-install-1.18.0

With 854 commits since the last release there are too many improvements and bug fixes to list here, but two highlights are:

* Hermetic builds using sandboxes. This should reduce the number of dependency hell and broken package DB problems.

* GHCi support. It's now much easier to use ghci when developing your packages, especially if those packages require preprocessors (e.g. hsc2hs).

Here's what working on a package might look like using the new features:

    # Only once:
    cabal sandbox init
    cabal install --only-dependencies --enable-tests

    # Configure, build, and run tests:
    cabal test  # now implies configure and build

    # Play around with the code in GHCi:
    cabal repl

Mikhail wrote a bit more about the user-visible changes on his blog: http://coldwa.st/e/blog/2013-08-21-Cabal-1-18.html

For a complete list of changes run

    git log cabal-install-v1.16.0.2..cabal-install-v1.18.0

in the cabal repo, or look at the GitHub compare page: https://github.com/haskell/cabal/compare/cabal-install-v1.16.0.2...cabal-install-v1.18.0 (only shows the last 250 commits).

57 people contributed to this release! 503 Mikhail Glushenkov 99 Johan Tibell 41 Duncan Coutts 39 Ian Lynagh 19 Brent Yorgey 19 Thomas Tuegel 18 Ben Millwood 16 Eyal Lotem 10 Thomas Dziedzic 7 Andres Loeh 6 John Wiegley 6 Benno Fünfstück 5 Gregory Collins 4 Herbert Valerio Riedel 4 Simon Hengel 3 Joachim Breitner 3 Luke Iannini 3 Bryan Richter 3 Richard Eisenberg 3 Tuncer Ayaz 3 Jens Petersen 2 Arun Tejasvi Chaganty 2 Bryan O'Sullivan 2 Eric Kow 2 Jookia 2 Paolo G. Giarrusso 2 Paolo Capriotti 1 Sönke Hahn 1 Yitzchak Gale 1 Albert Krewinkel 1 stepcut 1 Alexander Kjeldaas 1 Austin Seipp 1 Bardur Arantsson 1 Ben Doyle 1 Ben Gamari 1 Bram 1 Carter Tazio Schonwald 1 Clint Adams 1 Daniel Wagner 1 David Lazar 1 Erik Hesselink 1 Eugene Sukhodolin 1 Gabor Greif 1 Jack Henahan 1 Jason Dagit 1 Ken Bateman 1 Mark Lentczner 1 Masahiro Yamauchi 1 Merijn Verstraaten 1 Michael Thompson 1 Niklas Hambüchen 1 Oleksandr Manzyuk 1 Patrick Premont 1 Roman Cheplyaka 1 Sergei Trofimovich 1 Stephen Blackheath

-- Johan, on behalf of the cabal maintainers and contributors.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] ANN: Cabal v1.18.0 released
Hi all,

On behalf of the cabal maintainers and contributors I'm proud to announce the Cabal (and cabal-install) 1.18.0 release. To install, run:

    cabal update
    cabal install Cabal-1.18.0 cabal-install-1.18.0

With 854 commits since the last release there are too many improvements and bug fixes to list here, but two highlights are:

* Hermetic builds using sandboxes. This should reduce the number of dependency hell and broken package DB problems.

* GHCi support. It's now much easier to use ghci when developing your packages, especially if those packages require preprocessors (e.g. hsc2hs).

Here's what working on a package might look like using the new features:

    # Only once:
    cabal sandbox init
    cabal install --only-dependencies --enable-tests

    # Configure, build, and run tests:
    cabal test  # now implies configure and build

    # Play around with the code in GHCi:
    cabal repl

Mikhail wrote a bit more about the user-visible changes on his blog: http://coldwa.st/e/blog/2013-08-21-Cabal-1-18.html

For a complete list of changes run

    git log cabal-install-v1.16.0.2..cabal-install-v1.18.0

in the cabal repo, or look at the GitHub compare page: https://github.com/haskell/cabal/compare/cabal-install-v1.16.0.2...cabal-install-v1.18.0 (only shows the last 250 commits).

57 people contributed to this release! 503 Mikhail Glushenkov 99 Johan Tibell 41 Duncan Coutts 39 Ian Lynagh 19 Brent Yorgey 19 Thomas Tuegel 18 Ben Millwood 16 Eyal Lotem 10 Thomas Dziedzic 7 Andres Loeh 6 John Wiegley 6 Benno Fünfstück 5 Gregory Collins 4 Herbert Valerio Riedel 4 Simon Hengel 3 Joachim Breitner 3 Luke Iannini 3 Bryan Richter 3 Richard Eisenberg 3 Tuncer Ayaz 3 Jens Petersen 2 Arun Tejasvi Chaganty 2 Bryan O'Sullivan 2 Eric Kow 2 Jookia 2 Paolo G. Giarrusso 2 Paolo Capriotti 1 Sönke Hahn 1 Yitzchak Gale 1 Albert Krewinkel 1 stepcut 1 Alexander Kjeldaas 1 Austin Seipp 1 Bardur Arantsson 1 Ben Doyle 1 Ben Gamari 1 Bram 1 Carter Tazio Schonwald 1 Clint Adams 1 Daniel Wagner 1 David Lazar 1 Erik Hesselink 1 Eugene Sukhodolin 1 Gabor Greif 1 Jack Henahan 1 Jason Dagit 1 Ken Bateman 1 Mark Lentczner 1 Masahiro Yamauchi 1 Merijn Verstraaten 1 Michael Thompson 1 Niklas Hambüchen 1 Oleksandr Manzyuk 1 Patrick Premont 1 Roman Cheplyaka 1 Sergei Trofimovich 1 Stephen Blackheath

-- Johan, on behalf of the cabal maintainers and contributors.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Building recent Cabal/cabal-install
Hi,

Cabal 1.18 is still in the release candidate stage, so it has in fact not been released yet. We could either bump the dependency on base to < 4.8 before the 1.18 release, or we could make a Cabal-1.18.0.1 release together with the GHC release that bumps the dependency.

-- Johan

On Wed, Aug 28, 2013 at 9:08 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk wrote:

Greetings café,

There are some problems in Haddock to do with Template Haskell that I believe are being caused by Cabal. These were apparently addressed in 1.18 which came out recently. ‘Great!’, I thought. My problem is that I'm unsure how to use 1.18. I'm using GHC HEAD (well, 3 days old now) which is meant to come with Cabal 1.18 (the library) and in fact it seems to ship with it. Running ‘cabal --version’ however:

    cabal-install version 1.17.0
    using version 1.17.0 of the Cabal library

I'm unsure how to get it to use 1.18. I tried to build cabal-install from git but that just complains about dependencies not being possible to resolve: it depends on HTTP, which depends on base < 4.7, but 4.7 comes with GHC HEAD! I am baffled as to how I should go about using 1.18 and would love to hear from some people who have this up and running.

-- Mateusz K.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
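The failure described here is the usual upper-bound problem: a dependency's bound on base excludes the base that ships with GHC HEAD, so the solver has no valid plan. The shape of the constraint involved looks roughly like this (the exact bounds in HTTP's .cabal file at the time are an assumption for illustration):

```cabal
-- In a dependency's .cabal file:
build-depends: base >= 4.5 && < 4.7
-- GHC HEAD ships base-4.7, so the solver rejects every install plan
-- until the upper bound is relaxed to, e.g., < 4.8.
```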
Re: [Haskell-cafe] Debugging ByteString and Data.Binary.Get memory usage
A good starting point is to estimate how much space you think the data should take, using e.g. http://blog.johantibell.com/2011/06/memory-footprints-of-some-common-data.html If you do that, is the actual space usage close to what you expected?

On Thu, Aug 29, 2013 at 5:35 PM, Kyle Hanson hanoo...@gmail.com wrote:

OK I have a bunch of BSON documents that I convert to ByteStrings, put in a Map, and write to a socket based on the response. I noticed some high memory usage (in the GBs) so I decided to investigate. I simplified my problem into a small program that demonstrates more clearly what is happening. I wrote two versions, one with a lazy Map and lazy ByteStrings and one with a strict Map and strict ByteStrings. Both share the same memory behavior (except the lazy BS one is faster).

Here is the strict version: http://lpaste.net/92298
And here is the lazy version: http://lpaste.net/92299

I wrote this and compared the memory and speed behavior of ByteStrings generated by converting from a BSON document against ByteStrings generated more purely. The length of the ByteString from a BSON document is 68k and the length of the pure BS is 70k. Here is my weird memory behavior: both the BSON and pure methods use the same amount of memory after inserting 10k of them (90mb). However, when I go to look up a value, the BSON Map explodes the memory to over 250mb, even if I look up just 1 value. Looking up any number of values in the pure BS Map keeps the memory usage stable (90mb).

I am hoping someone can help me understand this. I have read some posts about temporary ByteStrings causing memory issues but I don't know how to get started debugging.

-- Kyle Hanson

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
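A rough estimator in the spirit of that blog post might look like the following. The per-structure overheads used here (about 9 words per strict ByteString, about 6 words per Data.Map entry, on a 64-bit GHC) are my recollection of the figures in the linked post and should be treated as assumptions, as should the example payload size:

```haskell
-- Back-of-the-envelope memory estimate for a Map of n strict
-- ByteStrings (64-bit GHC, 1 word = 8 bytes). Overheads assumed:
-- 9 words per strict ByteString, 6 words per Map key/value pair.
wordSize :: Int
wordSize = 8

estimateBytes :: Int -> Int -> Int
estimateBytes n payloadBytes =
    n * (9 * wordSize + payloadBytes)  -- the ByteString values
  + 6 * wordSize * n                   -- Map spine overhead

main :: IO ()
main = print (estimateBytes 10000 1024)
-- prints 11440000, i.e. ~11 MB for 10k one-kilobyte strings
```

If the live heap reported by +RTS -s is far above such an estimate, something (e.g. retained parent buffers from slicing) is keeping extra data alive.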
Re: [Haskell-cafe] Ideas on a fast and tidy CSV library
As I mentioned, you want to use the Streaming (or Incremental) module. As the program now stands, the call to `decode` causes 1.5 GB of CSV data to be read as a `Vector (Vector Int)` before any encoding starts.

-- Johan

On Wed, Aug 21, 2013 at 1:09 PM, Justin Paston-Cooper paston.coo...@gmail.com wrote:

Dear All, I now have some example code. I have put it on: http://pastebin.com/D9MPmyVd . vectorBinner is simply of type Vector Int -> Int. I am inputting a 1.5GB CSV on stdin, and would like vectorBinner to run over every single record, outputting results as computed, thus running in constant memory. My programme instead quickly approaches full memory use. Is there any way to work around this? Justin

On 25 July 2013 17:53, Johan Tibell johan.tib...@gmail.com wrote:

You can use the Incremental or Streaming modules to get more fine-grained control over when new parsed records are produced.

On Thu, Jul 25, 2013 at 11:02 AM, Justin Paston-Cooper paston.coo...@gmail.com wrote:

I hadn't yet tried profiling the programme. I actually deleted it a few days ago. I'm going to try to get something new running, and I will report back. On a slightly less related track: is there any way to use cassava so that I can have pure state and also yield CSV lines while my computation is running, instead of everything at the end as would be the case with the State monad?

On 23 July 2013 22:13, Johan Tibell johan.tib...@gmail.com wrote:

On Tue, Jul 23, 2013 at 5:45 PM, Ben Gamari bgamari.f...@gmail.com wrote:

Justin Paston-Cooper paston.coo...@gmail.com writes:

Dear All, Recently I have been doing a lot of CSV processing. I initially tried to use the Data.Csv (cassava) library provided on Hackage, but I found this to still be too slow for my needs. In the meantime I have reverted to hacking something together in C, but I have been left wondering whether a tidy solution might be possible to implement in Haskell.

Have you tried profiling your cassava implementation?
In my experience I've found it's quite quick. If you have an example of a slow path I'm sure Johan (cc'd) would like to know about it. I'm always interested in examples of code that is not running fast enough. Send me a reproducible example (preferably as a bug on the GitHub bug tracker) and I'll take a look. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
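The constant-memory shape being recommended in this thread can be mimicked in miniature: fold over records one at a time with a strict accumulator instead of materialising the whole collection first. `binRecord` stands in for the poster's vectorBinner; a real solution would drive this loop from Data.Csv.Incremental or Data.Csv.Streaming rather than a plain list:

```haskell
import Data.List (foldl')

-- Stand-in for per-record work (the poster's vectorBinner).
binRecord :: [Int] -> Int
binRecord = sum

-- Process records one at a time with a strict fold, so at most one
-- record (plus the accumulator) is resident at any point. With a
-- lazily produced record stream this runs in constant memory.
processAll :: [[Int]] -> Int
processAll = foldl' (\acc r -> acc + binRecord r) 0

main :: IO ()
main = print (processAll [[1,2],[3,4],[5]])
-- prints 15
```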
Re: [Haskell-cafe] Ideas on a fast and tidy CSV library
You can use the Incremental or Streaming modules to get more fine grained control over when new parsed records are produced. On Thu, Jul 25, 2013 at 11:02 AM, Justin Paston-Cooper paston.coo...@gmail.com wrote: I hadn't yet tried profiling the programme. I actually deleted it a few days ago. I'm going to try to get something new running, and I will report back. On a slightly less related track: Is there any way to use cassava so that I can have pure state and also yield CSV lines while my computation is running instead of everything at the end as would be with the State monad? On 23 July 2013 22:13, Johan Tibell johan.tib...@gmail.com wrote: On Tue, Jul 23, 2013 at 5:45 PM, Ben Gamari bgamari.f...@gmail.com wrote: Justin Paston-Cooper paston.coo...@gmail.com writes: Dear All, Recently I have been doing a lot of CSV processing. I initially tried to use the Data.Csv (cassava) library provided on Hackage, but I found this to still be too slow for my needs. In the meantime I have reverted to hacking something together in C, but I have been left wondering whether a tidy solution might be possible to implement in Haskell. Have you tried profiling your cassava implementation? In my experience I've found it's quite quick. If you have an example of a slow path I'm sure Johan (cc'd) would like to know about it. I'm always interested in examples of code that is not running fast enough. Send me a reproducible example (preferably as a bug on the GitHub bug tracker) and I'll take a look. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Ideas on a fast and tidy CSV library
On Tue, Jul 23, 2013 at 5:45 PM, Ben Gamari bgamari.f...@gmail.com wrote: Justin Paston-Cooper paston.coo...@gmail.com writes: Dear All, Recently I have been doing a lot of CSV processing. I initially tried to use the Data.Csv (cassava) library provided on Hackage, but I found this to still be too slow for my needs. In the meantime I have reverted to hacking something together in C, but I have been left wondering whether a tidy solution might be possible to implement in Haskell. Have you tried profiling your cassava implementation? In my experience I've found it's quite quick. If you have an example of a slow path I'm sure Johan (cc'd) would like to know about it. I'm always interested in examples of code that is not running fast enough. Send me a reproducible example (preferably as a bug on the GitHub bug tracker) and I'll take a look. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Not working examples in GHC API documentation
I filed a bug a while back: http://ghc.haskell.org/trac/ghc/ticket/7752 Someone that understands the API needs to fix the doc. :)

On Thu, Jul 18, 2013 at 7:58 PM, John Blackbox blackbox.dev...@gmail.com wrote:

Hi! Please take a look here: http://www.haskell.org/haskellwiki/GHC/As_a_library The examples are not working. Even the simplest one:

    import GHC
    import GHC.Paths ( libdir )
    import DynFlags ( defaultLogAction )

    main = defaultErrorHandler defaultLogAction $ do
        runGhc (Just libdir) $ do
            dflags <- getSessionDynFlags
            setSessionDynFlags dflags
            target <- guessTarget "test_main.hs" Nothing
            setTargets [target]
            load LoadAllTargets

throws:

    $ ghc -package ghc Main.hs
    [1 of 1] Compiling Main ( Main.hs, Main.o )

    Main.hs:6:25:
        Couldn't match type `DynFlags' with `[Char]'
        Expected type: DynFlags.FatalMessager
          Actual type: DynFlags.LogAction
        In the first argument of `defaultErrorHandler', namely `defaultLogAction'
        In the expression: defaultErrorHandler defaultLogAction
        In the expression:
          defaultErrorHandler defaultLogAction
          $ do { runGhc (Just libdir)
                 $ do { dflags <- getSessionDynFlags;
                        setSessionDynFlags dflags; } }

    Main.hs:7:7:
        Couldn't match expected type `DynFlags.FlushOut'
                    with actual type `IO SuccessFlag'
        In a stmt of a 'do' block:
          runGhc (Just libdir)
          $ do { dflags <- getSessionDynFlags;
                 setSessionDynFlags dflags;
                 target <- guessTarget "test_main.hs" Nothing;
                 setTargets [target]; }
        In the second argument of `($)', namely
          `do { runGhc (Just libdir)
                $ do { dflags <- getSessionDynFlags;
                       setSessionDynFlags dflags; } }'
        In the expression:
          defaultErrorHandler defaultLogAction
          $ do { runGhc (Just libdir)
                 $ do { dflags <- getSessionDynFlags;
                        setSessionDynFlags dflags; } }

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
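For what it's worth, the errors above arise because defaultErrorHandler's signature changed around GHC 7.6: it now takes a FatalMessager and a FlushOut rather than a LogAction. A sketch of the wiki example updated accordingly (untested here; it needs the ghc and ghc-paths packages, and the details may differ between GHC versions):

```haskell
-- Sketch for GHC 7.6-era APIs; defaultFatalMessager and
-- defaultFlushOut live in DynFlags.
import GHC
import GHC.Paths ( libdir )
import DynFlags ( defaultFatalMessager, defaultFlushOut )

main :: IO ()
main = defaultErrorHandler defaultFatalMessager defaultFlushOut $
    runGhc (Just libdir) $ do
        dflags <- getSessionDynFlags
        _ <- setSessionDynFlags dflags
        target <- guessTarget "test_main.hs" Nothing
        setTargets [target]
        _ <- load LoadAllTargets
        return ()
```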
Re: [Haskell-cafe] GSoC - A Cabal Project
Sounds like a good idea. Go ahead and apply. :)

On Tue, Apr 30, 2013 at 2:46 AM, Martin Ruderer martin.rude...@gmail.com wrote:

Hi, I am proposing a GSoC project on Cabal. It aims to open up the dependency solver for debugging purposes. The details are here: https://gist.github.com/mr-/7995081f89cff38e9443 I would really like to hear what you think about it. Best regards, Martin

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Diving into the records swamp (possible GSoC project)
Hi Adam,

Since we have already had *very* long discussions on this topic, I'm worried that I might open a can of worms by weighing in here, but the issue is important enough to me that I will do so regardless. Instead of endorsing one of the listed proposals directly, I will emphasize the problem, so we don't lose sight of it.

The problem people run into *in practice*, and complain about in blog posts, on Google+, or privately when we chat about Haskell over beer, is that they would like to write a record definition like this one:

    data Employee = Employee { id :: Int, name :: String }

    printId :: Employee -> IO ()
    printId emp = print $ id emp

but since that doesn't work well in Haskell today due to name collisions, the best practice today is to instead write something like:

    data Employee = Employee { employeeId :: Int, employeeName :: String }

    printId :: Employee -> IO ()
    printId emp = print $ employeeId emp

The downsides of the latter have been discussed elsewhere, but briefly they are:

* Overly verbose when there's no ambiguity.
* The ad-hoc prefix is hard to predict (i.e. sometimes abbreviations of the data type name are used).

The important requirement, which might seem a bit obvious, is that any solution to this problem had better not be *even more* verbose than the second code snippet above. If I understand the SORF proposal correctly, you would write:

    data Employee = Employee { id :: Int, name :: String }

    printId :: Employee -> IO ()
    printId emp = print $ emp.id

Is that correct, or do you have to replace 'Employee' with 'r { id :: Int }' in the type signature of 'printId'?

The discussions about an overhauled record system also involve lots of talk about record sub-typing, extensible records, and other more advanced features. I'd like to point out that there doesn't seem to be a great demand for these features.
They might be nice-to-haves or might fall out naturally from a solution to the namespacing problem above, but they are in fact not needed to solve the common problem people have with the Haskell record system. Cheers, Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
Hi Ben, On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote: The Repa plugin will also do proper SIMD vectorisation for stream programs, producing the SIMD primops that Geoff recently added. Along the way it will brutally convert all operations on boxed/lifted numeric data to their unboxed equivalents, because I am sick of adding bang patterns to every single function parameter in Repa programs. How far is this plugin from being usable to implement a {-# LANGUAGE Strict #-} pragma for treating a single module as if Haskell was strict? Cheers, Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On Thu, Apr 25, 2013 at 10:30 PM, Andrew Cowie and...@operationaldynamics.com wrote:

On Thu, 2013-04-25 at 21:15 -0700, Johan Tibell wrote: {-# LANGUAGE Strict #-}

God, I would love this. Obviously the plugin approach could do it, but could not GHC itself just _not create thunks_ for things unless told to be lazy in the presence of such a pragma? [at which point, we need an annotation for laziness, instead of the annotation for strictness. We're not using ampersand for anything, are we?

    func :: Int -> Thing -> WorldPeace
    func a b = ...

Ah, bikeshed, how we love thee]

We already have ~ that's used to make lazy patterns. :)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
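A tiny illustration of that ~ annotation: an ordinary constructor pattern forces its argument to WHNF when matched, while an irrefutable (~) pattern defers matching until a component is actually used. The function names here are made up for the example:

```haskell
-- Matching the plain pair pattern forces the argument.
constStrict :: (Int, Int) -> Int
constStrict (_, _) = 1

-- The ~ pattern never forces the pair; the match always "succeeds".
constLazy :: (Int, Int) -> Int
constLazy ~(_, _) = 1

main :: IO ()
main = print (constLazy undefined)
-- prints 1; constStrict undefined would instead crash on the pattern match
```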
Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails
On Thu, Apr 25, 2013 at 9:20 PM, Ben Lippmeier b...@ouroborus.net wrote:

On 26/04/2013, at 2:15 PM, Johan Tibell wrote:

Hi Ben, On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote: The Repa plugin will also do proper SIMD vectorisation for stream programs, producing the SIMD primops that Geoff recently added. Along the way it will brutally convert all operations on boxed/lifted numeric data to their unboxed equivalents, because I am sick of adding bang patterns to every single function parameter in Repa programs.

How far is this plugin from being usable to implement a {-# LANGUAGE Strict #-} pragma for treating a single module as if Haskell was strict?

There is already one that does this, but I haven't used it. http://hackage.haskell.org/package/strict-ghc-plugin It's one of the demo plugins, though you need to mark individual functions rather than the whole module (which would be straightforward to add). The Repa plugin is only supposed to munge functions using the Repa library, rather than the whole module.

I guess what I was really hoping for was a plugin that rigorously defined what it meant to make the code strict at a source-language level, rather than at a "make all lets into cases" Core level. :)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Fwd: GSoC Project Proposal: Markdown support for Haddock
On Tue, Apr 9, 2013 at 12:40 PM, Joe Nash joen...@blackvine.co.uk wrote: I would be interested in discussing this project with a potential mentor if one happens to be reading. I'm a second year Computer Science student at the University of Nottingham, very interested in doing a haskell.org SoC project. Normally I would volunteer to mentor any project I propose, but my schedule doesn't allow for it this summer. Perhaps Mark Lentczner is interested. He did some Haddock work before. I'm happy to provide input on what I think the feature should be about (in fact, I will probably write a blog post about it, like I do every year before GSoC). -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [haskell.org Summer of Code 2013] We're In!
Thanks for working on this again this year!

On Mon, Apr 8, 2013 at 12:50 PM, Edward Kmett ekm...@gmail.com wrote:

We (haskell.org) have been officially accepted into the Google Summer of Code for 2013. We should show up in the mentoring organization list as soon as I get some information we need to finalize the listing. Shachaf Ben-Kiki has volunteered to help out as our backup org administrator this year. If you are thinking about joining up as a mentor or a student this year, now would be a good time to start brainstorming about project ideas! I'll follow up with more information as soon as the listing goes in. In the meantime there is the #haskell-gsoc channel on irc.freenode.net. Feel free to pester me (or Shachaf) with questions!

-Edward Kmett

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
[Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
Hi all, Haddock's current markup language leaves something to be desired once you want to write more serious documentation (e.g. several paragraphs of introductory text at the top of the module doc). Several features are lacking (bold text, links that render as text instead of URLs, inline HTML). I suggest that we implement an alternative haddock syntax that's a superset of Markdown. It's a superset in the sense that we still want to support linkifying Haskell identifiers, etc. Modules that want to use the new syntax (which will probably be incompatible with the current syntax) can set: {-# HADDOCK Markdown #-} on top of the source file. Ticket: http://trac.haskell.org/haddock/ticket/244 -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
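To make the gap concrete, here is a rough side-by-side. The first comment reflects Haddock markup as it exists today (no bold, URLs render verbatim); the second is only a sketch of what the proposed Markdown-flavoured syntax might look like, not a settled design:

```haskell
-- Haddock today: /emphasis/, 'identifier' links, verbatim <http://example.com>.
-- | Inserts a key. See 'Data.Map.insert' and /the docs/ at <http://example.com>.

-- Under a Markdown-style syntax (sketch), opted into per module with
-- {-# HADDOCK Markdown #-}:
-- | Inserts a key. See `Data.Map.insert`, **bold text**, and
--   [link text that renders as text](http://example.com).
```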
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
On Thu, Apr 4, 2013 at 9:49 AM, Johan Tibell johan.tib...@gmail.com wrote: I suggest that we implement an alternative haddock syntax that's a superset of Markdown. It's a superset in the sense that we still want to support linkifying Haskell identifiers, etc. Modules that want to use the new syntax (which will probably be incompatible with the current syntax) can set: {-# HADDOCK Markdown #-} Let me briefly argue for why I suggested Markdown instead of the many other markup languages out there. Markdown has won. Look at all the big programming sites out there, from GitHub to StackOverflow, they all use a superset of Markdown. It did so mostly (in my opinion) because it codified the formatting style people were already using in emails and because it was pragmatic enough to include HTML as an escape hatch. -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock
Would it be too much to ask that a notation be used which has a formal syntax and a formal semantics? We will document our superset, sure. That's what others did as well. The point is using Markdown as the shared base. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Haskell] ANN: psqueue-benchmarks - benchmarks of priority queue implementations
I had a 5 second look at the PSQueue implementation and here's what I got so far:

* fromList should use foldl'.
* LTree should be spine strict (i.e. strict in the (LTree k p) fields).
* Winner should be strict in the (LTree k p) field and probably in all other fields as well.

This is a nice example showing that strict fields are the right default. If fields need to be lazy there should ideally be a comment explaining why that is needed (e.g. in the case of finger trees and lists). On Fri, Mar 29, 2013 at 9:53 AM, Niklas Hambüchen m...@nh2.me wrote: Hey Scott, I quickly tried your suggestion, plugging in foldr' from Data.Foldable and sprinkling a few seqs in some places, but it doesn't help the stack overflow. On Fri 29 Mar 2013 16:23:55 GMT, Scott Dillard wrote: I do not know why it overflows. It's been a while, but isn't the answer usually too much laziness? Maybe try changing the foldr in fromList to foldr'? I would try it out quickly but do not have ghc installed on any computers here. I am happy to start a repo for this library, but there is not much history to import so anyone else may do it. I'm not sure how hackage upload permissions work... I guess I just change the maintainer field in the .cabal file from myself to someone else...? Any volunteers? On Thu, Mar 28, 2013 at 11:16 PM, Kazu Yamamoto k...@iij.ad.jp wrote: Hi Niklas, * PSQueue throws a stack space overflow if you try to put in 10 * Ints A slightly different implementation is used in GHC: https://github.com/ghc/packages-base/blob/master/GHC/Event/PSQ.hs Could you test it? If this code also has the same problem, I need to fix it. --Kazu
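The strictness advice above can be sketched as follows. The constructor names mirror PSQueue's, but the fields and the naive insert are illustrative stand-ins, not the package's actual code:

```haskell
import Data.List (foldl')

-- Illustrative spine-strict tree in the style suggested above: the
-- subtree fields carry a ! so inserts can't leave subtree thunks behind.
data LTree k p
  = Start
  | LLoser !k !p !(LTree k p) !(LTree k p)

-- A naive, unbalanced insert, only to give foldl' something to fold.
insert :: Ord k => k -> p -> LTree k p -> LTree k p
insert k p Start = LLoser k p Start Start
insert k p (LLoser k' p' l r)
  | k < k'    = LLoser k' p' (insert k p l) r
  | otherwise = LLoser k' p' l (insert k p r)

-- fromList with foldl' forces the accumulator at every step; a lazy
-- foldl (or a foldr) would instead build an O(n) chain of thunks that
-- overflows the stack when finally forced.
fromList :: Ord k => [(k, p)] -> LTree k p
fromList = foldl' (\t (k, p) -> insert k p t) Start
```

The two changes work together: foldl' forces the outermost constructor at each step, and the strict fields make that forcing propagate into the spine instead of stopping at the root.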
Re: [Haskell-cafe] Associated types for number coercion
On Tue, Mar 19, 2013 at 3:58 PM, Christopher Done chrisd...@gmail.com wrote: From the paper Fun with Type Funs, it's said: One compelling use of such type functions is to make type coercions implicit, especially in arithmetic. Suppose we want to be able to write add a b to add two numeric values a and b even if one is an Integer and the other is a Double (without writing fromIntegral explicitly). And then an Add class is defined which can dispatch at the type level to appropriate functions which resolve two types into one, with a catch-all case for Num. Has anyone put this into a package, for all common arithmetic operations? I would use it. Doing arithmetic in Haskell always feels labored because of having to constantly convert between number types. I prefer the current way (which is interestingly what Go chose as well). With implicit casts it's easy to shoot yourself in the foot, e.g. when doing bit-twiddling. These two are different:

f :: Word8 -> Int -> Word32
f w8 n = fromIntegral (w8 `shiftL` n)

f' :: Word8 -> Int -> Word32
f' w8 n = (fromIntegral w8) `shiftL` n
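To see why the two definitions above differ, it helps to evaluate them on an input whose high bit matters (this snippet just packages the two functions from the email with a driver):

```haskell
import Data.Bits (shiftL)
import Data.Word (Word8, Word32)

f :: Word8 -> Int -> Word32
f w8 n = fromIntegral (w8 `shiftL` n)   -- shift at 8 bits, then widen

f' :: Word8 -> Int -> Word32
f' w8 n = fromIntegral w8 `shiftL` n    -- widen to 32 bits, then shift

main :: IO ()
main = do
  print (f  0x80 1)  -- 0: the high bit is shifted out of the Word8
  print (f' 0x80 1)  -- 256: widened first, so the bit survives
```

With implicit coercions the compiler would have to pick one of these meanings silently, which is exactly the foot-gun being described.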
[Haskell-cafe] Fwd: Now Accepting Applications for Mentoring Organizations for GSoC 2013
[bcc: hask...@haskell.org] We should make sure that we apply for Google Summer of Code this year as well. It's been very successful in previous years; we have gotten several projects funded every year. -- Johan -- Forwarded message -- From: Carol Smith car...@google.com Date: Mon, Mar 18, 2013 at 12:00 PM Subject: Now Accepting Applications for Mentoring Organizations for GSoC 2013 To: Google Summer of Code Announce google-summer-of-code-annou...@googlegroups.com Hi all, We're pleased to announce that applications for mentoring organizations for Google Summer of Code 2013 are now being accepted [1]. If you'd like to apply to be a mentoring organization you can do so via Melange [2]. If you have questions about how to use Melange, please see our User's Guide [3]. Please note that the application period [4] closes on 29 March at 19:00 UTC [5]. We will not accept any late applications for any reason. [1] - http://google-opensource.blogspot.com/2013/03/mentoring-organization-applications-now.html [2] - http://www.google-melange.com [3] - http://en.flossmanuals.net/melange/ [4] - http://www.google-melange.com/gsoc/events/google/gsoc2013 [5] - http://goo.gl/xmQMJ Cheers, Carol -- You received this message because you are subscribed to the Google Groups Google Summer of Code Announce group. To unsubscribe from this group and stop receiving emails from it, send an email to google-summer-of-code-announce+unsubscr...@googlegroups.com. To post to this group, send email to google-summer-of-code-annou...@googlegroups.com. Visit this group at http://groups.google.com/group/google-summer-of-code-announce?hl=en. For more options, visit https://groups.google.com/groups/opt_out.
Re: [Haskell-cafe] GSOC application level
Hi Mateusz, On Wed, Mar 6, 2013 at 4:58 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk wrote: Can someone that has been around for a bit longer comment on what level of experience with Haskell and underlying concepts is usually expected from candidates? Are applications discarded simply based on the applicant not having much previous experience in the target area? What is the level of the competition for places on the projects? We don't have a fixed bar for things you need to know when you apply. Rather, we try to guess whether the student can accomplish the project he/she is applying for, based on whatever evidence we have, e.g. contributions to other projects, released libraries on Hackage, and other forms of community participation. Since we typically have more proposals than slots, we will rank students both on how impactful we think the project will be and on how likely we think it is that the student will succeed. Both these qualities map onto a single number that we use to stack-rank proposals.
Re: [Haskell-cafe] Concurrency performance problem
On Mon, Mar 4, 2013 at 11:39 AM, Łukasz Dąbek sznu...@gmail.com wrote: Thank you for your help! This solved my performance problem :) Anyway, the second question remains: why is the performance of a single-threaded calculation affected by the RTS -N parameter? Is GHC doing some parallelization behind the scenes? I believe it's because -N makes GHC use the threaded RTS, which is different from the non-threaded RTS and therefore has some overhead.
Re: [Haskell-cafe] The state of binary (de)serialization
On Tue, Feb 26, 2013 at 11:17 PM, Vincent Hanquez t...@snarc.org wrote: On Mon, Feb 25, 2013 at 11:59:42AM -0800, Johan Tibell wrote: - cereal can output a strict bytestring (runPut) or a lazy one (runPutLazy), whilst binary only outputs lazy ones (runPut) The lazy one is more general and you can use toStrict (from bytestring) to get a strict ByteString from a lazy one, without loss of performance. Two major problems of lazy bytestrings are that:

* you can't pass them to C bindings easily.
* doing IO with them without rewriting the chunks can sometimes (depending on how the lazy bytestring was produced) result in serious performance degradation, from calling syscalls on arbitrarily small chunks (e.g. a socket's 'send').

Personally, I also like the (obvious) stricter behavior of strict bytestrings. My point was rather that all cereal does for you is to concat the lazy chunks it already has into a strict bytestring before returning them. If you want that behavior with binary, just call concat yourself. The benefit of not concatenating by default is that it costs O(n) time, which you might avoid if you can consume the lazy bytestring directly (e.g. through writev).
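Johan's point — that a strict result is just a concatenation of the lazy chunks cereal already has — can be sketched like this (a minimal illustration, not binary's internals; the putWord32be payload is arbitrary):

```haskell
import Data.Binary.Put (runPut, putWord32be)
import qualified Data.ByteString as S
import qualified Data.ByteString.Lazy as L

-- binary's runPut yields a lazy ByteString; a strict one is just a
-- concat of its chunks. (L.toStrict, available since bytestring 0.10,
-- does exactly this.)
strictPut :: S.ByteString
strictPut = S.concat (L.toChunks (runPut (putWord32be 0xDEADBEEF)))
```

Doing the concat at the call site, rather than inside the library, is what lets consumers who can handle chunked output (e.g. via writev) skip the O(n) copy entirely.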
Re: [Haskell-cafe] The state of binary (de)serialization
On Mon, Feb 25, 2013 at 4:30 AM, Nicolas Trangez nico...@incubaid.com wrote: - cereal supports chunk-based 'partial' parsing (runGetPartial). It looks like support for this is introduced in recent versions of 'binary' as well (runGetIncremental) Yes. Binary now supports an incremental interface. We intend to make sure binary has all the same functionality as cereal. We'd like to move away from having two packages if possible, and since binary has the larger installed user base we're trying to make that the go-to package. - cereal can output a strict bytestring (runPut) or a lazy one (runPutLazy), whilst binary only outputs lazy ones (runPut) The lazy one is more general and you can use toStrict (from bytestring) to get a strict ByteString from a lazy one, without loss of performance. - Next to binary and cereal, there's bytestring's Builder interface for serialization, and Simon Meier's blaze-binary prototype Simon's builder (originally developed in blaze-binary) has been merged into the bytestring package. In the future binary will just re-export that builder. There are some blog posts and comments out there about merging cereal and binary, is this what's the goal/going on (cfr runGetIncremental)? It's most definitely the goal and it's basically done. The only thing I don't think we'll adopt from cereal is the instances for container types. In my use-case I think using Builder instead of binary/cereal's PutM monad shouldn't be a major problem. Is this advisable performance-wise? You can go ahead and use the builder directly if you like. Overall: what's the advised future-proof strategy of handling binary (de)serialization? Use binary or the builder from bytestring whenever you can. Since the builder in bytestring was recently added you might have to fall back to blaze-builder if you believe your users can't rely on the latest version of bytestring. -- Johan
Re: [Haskell-cafe] ANN: lazy-csv - the fastest and most space-efficient parser for CSV
On Mon, Feb 25, 2013 at 2:32 PM, Don Stewart don...@gmail.com wrote: Cassava is quite new, but has the same goals as lazy-csv. It's about a year old now - http://blog.johantibell.com/2012/08/a-new-fast-and-easy-to-use-csv-library.html I know Johan has been working on the benchmarks of late - it would be very good to know how the two compare in features I whipped together a quick benchmark: https://github.com/tibbe/cassava/blob/master/benchmarks/Benchmarks.hs To run, check out the cassava repo on GitHub and run:

cabal configure --enable-benchmarks
cabal build
cabal bench

Here are the results (all the normal caveats for benchmarking apply):

benchmarking positional/decode/presidents/without conversion
mean: 62.85965 us, lb 62.56705 us, ub 63.26101 us, ci 0.950
std dev: 1.751446 us, lb 1.371323 us, ub 2.295576 us, ci 0.950

benchmarking positional/decode/streaming/presidents/without conversion
mean: 93.81925 us, lb 91.14701 us, ub 98.19217 us, ci 0.950
std dev: 17.20842 us, lb 11.58690 us, ub 23.41786 us, ci 0.950

benchmarking comparison/lazy-csv
mean: 133.2609 us, lb 132.4415 us, ub 135.3085 us, ci 0.950
std dev: 6.193178 us, lb 3.123661 us, ub 12.83148 us, ci 0.950

The first two sets of numbers are for cassava (in all-at-once vs streaming mode). The last set is for lazy-csv. The feature sets of the two libraries are quite different. Both do basic CSV parsing (with some extensions).

* lazy-csv parses CSV data to something akin to [[ByteString]], but with a heavy focus on error recovery and precise error messages.
* cassava parses CSV data to [a], where a is a user-defined type that represents a CSV record. There are options to recover from *type conversion* errors, but not from malformed CSV.

cassava has several parsing modes: incremental for parsing interleaved with I/O, streaming for lazy parsing (with or without I/O), and all-at-once parsing for when you want to hold all the data in memory.
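The "CSV data to [a]" style described above looks roughly like this — a sketch against cassava's Data.Csv API, with a made-up (name, age) record and inline sample data:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Csv (HasHeader(NoHeader), decode)
import qualified Data.ByteString.Lazy.Char8 as BL
import qualified Data.Vector as V

main :: IO ()
main =
  -- decode parses the whole input into a Vector of records; here each
  -- record is converted to a (name, age) pair via the tuple instance.
  case decode NoHeader (BL.pack "John,27\r\nJane,28\r\n") of
    Left err -> putStrLn err
    Right rs -> mapM_ print (V.toList (rs :: V.Vector (String, Int)))
```

A malformed field (e.g. a non-numeric age) surfaces as a Left with a type-conversion error, matching the recovery behavior described above.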
-- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] The state of binary (de)serialization
On Mon, Feb 25, 2013 at 4:51 PM, Alexander Solla alex.so...@gmail.com wrote: On Mon, Feb 25, 2013 at 11:59 AM, Johan Tibell johan.tib...@gmail.com wrote: There are some blog posts and comments out there about merging cereal and binary, is this what's the goal/going on (cfr runGetIncremental)? It's most definitely the goal and it's basically done. The only thing I don't think we'll adopt from cereal is the instances for container types. Why not? Those instances are useful. Without instances defined in binary/cereal, pretty much every Happstack (or, better said, every ixset/acidstate/safecopy stack) user will have to have orphan instances. I will have to give a bit more context to answer this one. After the binary package was created we realized that it should really have been two packages:

* One package for serialization and deserialization of basic types that have a well-defined serialization format even outside the package, e.g. little and big endian integers, IEEE floats, etc. This package would correspond to Data.Binary.Get, Data.Binary.Builder, and Data.Binary.Put.
* One package that defines a particular binary format useful for serializing arbitrary Haskell values. This package would correspond to Data.Binary.

For the latter we need to decide what guarantees we make. For example, is the format stable between releases? Is the format public (such that other libraries can parse the output of binary)? Right now these two questions are left unanswered in both binary and cereal, making those packages less useful. Before we answer those questions we don't want to 1) add more dependencies to binary and 2) define serialization formats that we might break in the next release. So perhaps once we've settled these issues we'll include instances for containers. Also, cereal has a generic instance. Will the new binary? That sounds reasonable. If someone sends a pull request, Lennart or I will review and merge it.
-- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] What magic has the new IO manager done to improve performance ?
Hi, On Saturday, February 16, 2013, yi huang wrote: I'm curious about the design and trade-offs behind the new IO manager. I see two changes from the code: 1. Run an IO manager thread on each capability. 2. Use the ONESHOT flag to save a system call. Are there other interesting things to know? Is it possible to use epoll's ET mode to save even more system calls? Andreas and Kazu (CCed) would know more. In addition to the things you mentioned, the parallel I/O manager also uses lock striping and is smarter about when it makes blocking system calls.
[Haskell-cafe] Fwd: Google Summer of Code 2013
Hi all, Summer of Code has always been a good way for us to get some important work done that no one otherwise has time to do. I encourage everyone to come up with good Summer of Code projects so we have a good number when the time for students to apply comes around. Empirically, projects that focus on existing infrastructure (e.g. Cabal) work well. They limit the scope enough for students to 1) make something that impacts lots of people and 2) not be given too much rope*. * Building something good from scratch requires lots of experience, something most students don't have, almost by definition. -- Johan -- Forwarded message -- From: Carol Smith car...@google.com Date: Mon, Feb 11, 2013 at 11:02 AM Subject: Google Summer of Code 2013 To: Google Summer of Code Announce google-summer-of-code-annou...@googlegroups.com Hi all, We're pleased to announce that Google Summer of Code will be happening for its ninth year this year. Please check out the blog post [1] about the program and read the FAQs [2] and Timeline [3] on Melange for more information. [1] - http://google-opensource.blogspot.com/2013/02/flip-bits-not-burgers-google-summer-of.html [2] - http://www.google-melange.com/gsoc/document/show/gsoc_program/google/gsoc2013/help_page [3] - http://www.google-melange.com/gsoc/events/google/gsoc2013 Cheers, Carol
Re: [Haskell-cafe] cabal-dev add-source
On Fri, Feb 8, 2013 at 9:53 AM, Blake Rain blake.r...@gmail.com wrote: You need to call cabal-dev add-source on P1 again to copy over the sdist, then do a cabal-dev install. See notes under Using a sandbox-local Hackage on https://github.com/creswick/cabal-dev With the new cabal sandboxing (due in 1.18) this won't be necessary as we create a link to the repo, instead of installing a copy. We will rebuild the linked repo as needed.
Re: [Haskell-cafe] cabal-dev add-source
On Fri, Feb 8, 2013 at 10:07 AM, JP Moresmau jpmores...@gmail.com wrote: Johan, thanks, that brings me to a point that I wanted to raise. I'm playing with cabal-dev because users have asked me to add support for it in EclipseFP (so projects could have their own sandbox and have dependencies between projects without polluting the main package databases). Is it worth it, or should I just wait for cabal 1.18 and use the sandboxing facility? Or will the two work similarly enough that supporting both will be easy? Does the sandboxing in cabal mean that tools like cabal-dev are going to get deprecated? I think they will be similar enough that you could easily port the code. The new cabal sandboxing will work as follows:

cabal sandbox --init
cabal add-source dir

and then you use cabal commands like normal (e.g. configure, build, test). No installing necessary. I cannot speak for the cabal-dev developers. We do intend to support a superset of the cabal-dev functionality eventually. What we're missing now is ghci support.
Re: [Haskell-cafe] cabal-dev add-source
On Fri, Feb 8, 2013 at 10:24 AM, Ozgun Ataman ozata...@gmail.com wrote: Which, thanks to Johan's help yesterday, can still be worked around (for now) by starting ghci with: ghci -package-conf ./cabal-sandbox/your-package-conf-folder-here/ You can indeed do this. For real ghci support in cabal we need to also pass -package flags, C libraries, etc to ghci. That's why it's not done yet. -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Substantial (1:10??) system dependencies of runtime performance??
On Sat, Feb 2, 2013 at 5:14 PM, Ozgun Ataman ozata...@gmail.com wrote: If you are doing row-by-row transformations, I would recommend giving a try to my csv-conduit or csv-enumerator packages on Hackage. They were designed with constant space operation in mind, which may help you here. If you're keeping an accumulator around, however, you may still run into issues with too much laziness. The cassava package also has a Streaming and an Incremental module for constant space parsing. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Heads up: planned removal of String instances from HTTP package
On Tue, Jan 29, 2013 at 2:15 PM, Ganesh Sittampalam gan...@earth.li wrote: tl;dr: I'm planning on removing the String instances from the HTTP package. This is likely to break code. Obviously it will involve a major version bump. The basic reason is that this instance is rather broken in itself. A String ought to represent Unicode data, but the HTTP wire format is bytes, and HTTP makes no attempt to handle encoding. This was discussed on the libraries@ list a while back, but I'm happy to discuss further if there's a general feeling that this is a bad thing to do: http://www.haskell.org/pipermail/libraries/2012-September/018426.html I will probably upload the new version in a week or two. I think it's the right thing to do. Providing a little upgrade guide should help things to go smoother. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Handling exceptions or gracefully releasing resources
Hi, The pattern is essentially the same as in imperative languages; every allocation should involve a finally clause that deallocates the resource. On Tue, Jan 29, 2013 at 2:59 PM, Thiago Negri evoh...@gmail.com wrote: Should I put `Control.Exception.finally` on every single line of my finalizers? I'm not sure what you're asking here. If your finally clause tries to call close, you don't have to catch exceptions raised by close (what would you do with them anyway?).
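The allocate-with-finally pattern described above is what Control.Exception's bracket packages up; a minimal sketch (the withLogFile name and log path are made up for illustration):

```haskell
import Control.Exception (bracket)
import System.IO (Handle, IOMode(WriteMode), openFile, hClose, hPutStrLn)

-- bracket pairs the acquisition with its release: hClose runs whether
-- the body returns normally or throws, which is exactly the "every
-- allocation involves a finally clause" pattern.
withLogFile :: FilePath -> (Handle -> IO a) -> IO a
withLogFile path = bracket (openFile path WriteMode) hClose

main :: IO ()
main = withLogFile "/tmp/example.log" $ \h ->
  hPutStrLn h "hello"  -- the handle is closed even if this throws
```

Writing one such with-style wrapper per resource keeps the finally clauses in a single place instead of scattered through the finalizer code.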
Re: [Haskell-cafe] Mac os x (Intel, 10.6.8) problem compiling yesod-core-1.1.7.1
Adding Bryan, who wrote this code. On Mon, Jan 28, 2013 at 1:23 AM, jean-christophe mincke jeanchristophe.min...@gmail.com wrote: Hello GHC version: 7.4.2 When I do a cabal-dev install yesod-core, I get the following error:

Loading package blaze-builder-conduit-0.5.0.3 ... linking ... done.
Loading package hashable-1.2.0.5 ... linking ... ghc: lookupSymbol failed in resolveImports
/Users/V3/windev/Haskell/Z/cabal-dev//lib/HShashable-1.2.0.5.o: unknown symbol `_hashable_siphash24_sse2'
ghc: unable to load package `hashable-1.2.0.5'

Has anyone encountered the same problem? Thank you J-C
Re: [Haskell-cafe] Space leaks in function that uses Data.Vector.Mutable
Hi! You have to look outside the place function, which is strict enough. I would look for a call to unsafeWrite that doesn't evaluate its argument before writing it into the vector. Perhaps you're doing something like:

MV.unsafeWrite v i (i + 1, ...)

Since tuples are lazy, the i + 1 will be stored as a thunk. I recommend doing:

data DescriptiveName a = DescriptiveName {-# UNPACK #-} !Int a

and using a MV.MVector (PrimState m) (DescriptiveName t) if speed is really of the essence. Aside: You can optimize place slightly by:

* making it strict in val1, and
* making it INLINE.

The reason you want it to inline* is that the function is polymorphic, and inlining it at a call site where it is known whether you're working in IO or ST will improve performance. Here's the slightly optimized version:

place :: (PrimMonad m) => MV.MVector (PrimState m) (Int, t) -> (Int, t) -> Int -> m ()
place v max@(!val1,_) i = place' i
  where
    place' i = do
      let j = i - 1
      if j < 0
        then return ()
        else do
          curr@(val2, _) <- MV.unsafeRead v j
          if val2 < val1
            then do
              MV.unsafeWrite v j max
              MV.unsafeWrite v i curr
              place' j
            else return ()
{-# INLINE place #-}

* It should be enough to write two SPECIALIZE pragmas, one for IO and one for ST, but GHC doesn't seem to like that for some reason:

/tmp/Test.hs:24:1: Warning: RULE left-hand side too complicated to desugar (place @ (ST s) @ t ($fPrimMonadST @ s ($fMonadST @ s))) `cast` ...
/tmp/Test.hs:25:1: Warning: RULE left-hand side too complicated to desugar (place @ IO @ t $fPrimMonadIO) `cast` ...

Cheers, Johan
Re: [Haskell-cafe] Example programs with ample use of deepseq?
On Mon, Jan 7, 2013 at 4:06 AM, Joachim Breitner m...@joachim-breitner.de wrote: I’m wondering if the use of deepseq to avoid unwanted laziness might be too large a hammer in some use cases. Therefore, I’m looking for real world programs with ample use of deepseq, and ideally easy ways to test performance (so preferably no GUI applications). I never use deepseq, except when setting up benchmark data, where it's a convenient way to make sure that the data is evaluated before the benchmark is run. When removing space leaks you want to avoid creating the thunks in the first place, not remove them after the fact. Consider a leak caused by a list of N thunks. Even if you deepseq that list to eventually remove those thunks, you won't lower your peak memory usage if the list was materialized at some point. In addition, by not creating the thunks in the first place you avoid some allocation costs. -- Johan
Re: [Haskell-cafe] How is default method signatures supposed to be used together with Generics
On Tue, Dec 11, 2012 at 9:31 PM, dag.odenh...@gmail.com dag.odenh...@gmail.com wrote: The practice seems to be to not export it, but maybe it would be a better practice to export it. That way it can work without DefaultSignatures too, and if you use the generic-deriving package it could work with zero extensions or GHC-specific dependencies. Are you adding support to hashable after all? I'd love that, but thought you decided against it because of the clash with the existing defaults. We did a major redesign of the library (without affecting users much). This meant that hash became a top-level function and hashWithSalt the only method of the Hashable class. As it's now the only method we don't have a default implementation for it anymore (except the default method signature). -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] containers license issue
On Wed, Dec 12, 2012 at 10:40 AM, Clark Gaebel cgae...@uwaterloo.ca wrote: I just did a quick derivation from http://graphics.stanford.edu/~seander/bithacks.html#RoundUpPowerOf2 to get the highest bit mask, and did not reference FXT nor the containers implementation. Here is my code:

highestBitMask :: Word64 -> Word64
highestBitMask x1 =
  let x2 = x1 .|. x1 `shiftR` 1
      x3 = x2 .|. x2 `shiftR` 2
      x4 = x3 .|. x3 `shiftR` 4
      x5 = x4 .|. x4 `shiftR` 8
      x6 = x5 .|. x5 `shiftR` 16
      x7 = x6 .|. x6 `shiftR` 32
  in x7 `xor` (x7 `shiftR` 1)

This code is hereby released into the public domain. Problem solved. I will integrate this into containers later today.
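As a quick sanity check of the derivation above (a standalone snippet repeating the function so it runs on its own): the shifts propagate the highest set bit into every lower position, and the final xor then clears everything below it.

```haskell
import Data.Bits (shiftR, xor, (.|.))
import Data.Word (Word64)

-- Same bithacks-style derivation as in the email: smear the top bit
-- rightwards (1, 2, 4, 8, 16, 32 positions), then xor away all but it.
highestBitMask :: Word64 -> Word64
highestBitMask x1 =
  let x2 = x1 .|. x1 `shiftR` 1
      x3 = x2 .|. x2 `shiftR` 2
      x4 = x3 .|. x3 `shiftR` 4
      x5 = x4 .|. x4 `shiftR` 8
      x6 = x5 .|. x5 `shiftR` 16
      x7 = x6 .|. x6 `shiftR` 32
  in x7 `xor` (x7 `shiftR` 1)

main :: IO ()
main = do
  print (highestBitMask 0x0F00)  -- 2048, i.e. 0x0800
  print (highestBitMask 1)       -- 1
  print (highestBitMask 0)       -- 0
```

For 0x0F00 the smearing yields 0x0FFF, and 0x0FFF `xor` 0x07FF leaves only the top bit, 0x0800.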
Re: [Haskell-cafe] containers license issue
On Wed, Dec 12, 2012 at 12:09 PM, Clark Gaebel cgae...@uwaterloo.ca wrote: Possibly. I tend to trust GHC's strictness analyzer until proven otherwise, though. Feel free to optimize as necessary. The GHC strictness analyzer will have no troubles with this. Since the return type is Word64, there's no place for thunks to hide as Word64 is defined as: data Word64 = W64# Word# -- on 64-bit archs or some such. Since Word# is unlifted it cannot be a pointer (to a thunk). ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] containers license issue
On Wed, Dec 12, 2012 at 12:18 PM, Dmitry Kulagin dmitry.kula...@gmail.com wrote: Clark, Johan, thank you! That looks like perfect solution to the problem. Clean-room reimplementation merged and released as 0.5.2.0. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] containers license issue
On Wed, Dec 12, 2012 at 5:38 PM, Michael Orlitzky mich...@orlitzky.com wrote: On 12/12/2012 08:15 PM, Johan Tibell wrote: On Wed, Dec 12, 2012 at 12:18 PM, Dmitry Kulagin dmitry.kula...@gmail.com wrote: Clark, Johan, thank you! That looks like perfect solution to the problem. Clean-room reimplementation merged and released as 0.5.2.0. Not even a little bit clean-room: he posted the source code that he was going to reimplement like two hours earlier, and had obviously read it. Clean-room was clearly a bit over-enthusiastic. He said he didn't use the other code as a reference but instead the bithacks reference, which is public domain. I'm comfortable enough with this. I wasn't particularly worried about the prior implementation either, as I don't think (as a non-lawyer) that it would hold up as copyrightable in court, due to its trivial nature and the presence of prior art (this is a standard bit-twiddling algorithm).
[Haskell-cafe] How is default method signatures supposed to be used together with Generics
Hi, I noticed that you're not required to export the types mentioned in the default method signature. For example, you could have:

default hashWithSalt :: (Generic a, GHashable (Rep a)) => Int -> a -> Int
hashWithSalt salt = ghashWithSalt salt . from

and not export the GHashable class. However, if users try to define an instance of Hashable but forget to derive Generic:

data Foo a = Foo a String deriving (Eq) -- Oops, forgot Generic here

instance (Hashable a) => Hashable (Foo a)

they get a pretty bad error message:

Test.hs:10:10:
    Could not deduce (Generic (Foo a), Data.Hashable.Class.GHashable (GHC.Generics.Rep (Foo a)))
      arising from a use of `Data.Hashable.Class.$gdmhashWithSalt'
    from the context (Hashable a)
      bound by the instance declaration at Test.hs:10:10-41
    Possible fix:
      add (Generic (Foo a), Data.Hashable.Class.GHashable (GHC.Generics.Rep (Foo a))) to the context of the instance declaration
      or add instance declarations for (Generic (Foo a), Data.Hashable.Class.GHashable (GHC.Generics.Rep (Foo a)))
    In the expression: (Data.Hashable.Class.$gdmhashWithSalt)
    In an equation for `hashWithSalt': hashWithSalt = (Data.Hashable.Class.$gdmhashWithSalt)
    In the instance declaration for `Hashable (Foo a)'

Exporting GHashable would help a little bit in that users would at least know what this GHashable class that the error talks about is. What's best practice? -- Johan
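For comparison, the non-erroring version of the instance just adds Generic to the deriving clause. A sketch, assuming a hashable version with the generic default method described in this thread:

```haskell
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics (Generic)
import Data.Hashable (Hashable)

-- Deriving Generic is what satisfies the default method signature's
-- (Generic a, GHashable (Rep a)) context, so the empty instance below
-- picks up the generic hashWithSalt implementation.
data Foo a = Foo a String deriving (Eq, Generic)

instance Hashable a => Hashable (Foo a)
```

The error in the message above is exactly what appears when the Generic in the deriving clause is missing, which is why the question of exporting GHashable (to make that error legible) matters.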
Re: [Haskell-cafe] The end of an era, and the dawn of a new one
On Wed, Dec 5, 2012 at 12:37 PM, David Terei davidte...@gmail.com wrote: I have always considered the LLVM code generator my responsibility and will continue to do so. I don't seem to find the time to make improvements to it but make sure to keep it bug free and working with the latest LLVM releases. So if others are interested in working on it then there is plenty of room for that, but I'll continue to maintain it and be responsible for it at the least. I will maintain the I/O manager as per usual, probably together with Bryan and Andreas (but I cannot speak for them of course).
Re: [Haskell-cafe] Is it possible to have constant-space JSON decoding?
Hi Oleg, On Tue, Dec 4, 2012 at 9:13 PM, o...@okmij.org wrote: I have been doing, for several months, constant-space processing of large XML files using iteratees. The file contains many XML elements (which are a bit more complex than a number). An element can be processed independently. After the parser finished with one element, and dumped the related data, the processing of the next element starts anew, so to speak. No significant state is accumulated for the overall parsing sans the counters of processed and bad elements, for statistics. XML is somewhat like JSON, only more complex: an XML parser has to deal with namespaces, parsed entities, CDATA sections and the other interesting stuff. Therefore, I'm quite sure there should not be fundamental problems in constant-space parsing of JSON. BTW, the parser itself is described here: http://okmij.org/ftp/Streams.html#xml It certainly is possible (using a SAX style parser). What you can't have (I think) is a function: decode :: FromJSON a => ByteString -> Maybe a and constant-memory parsing at the same time. The return type here says that we will return Nothing if parsing fails. We can only do so after looking at the whole input (otherwise how would we know if it's malformed?). The use case aeson was designed for (which I bet is the majority use case) is parsing smaller messages sent over the network (i.e. in web service APIs), so this is the only mode of parsing it supplies. -- Johan
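The trade-off can be made concrete with a toy incremental interface in the spirit of attoparsec's Result type (the names below are illustrative, not aeson's API): a consumer driven through Partial continuations can run in constant space, but it only learns about malformed input once the offending chunk arrives, which is why `decode :: ... -> Maybe a` cannot be both total and incremental.

```haskell
-- Toy incremental result type, loosely modelled on attoparsec's.
data Result a
  = Fail String                   -- malformed input, known only once seen
  | Partial (String -> Result a)  -- feed me another chunk
  | Done a                        -- a complete value

-- Sums digits as chunks arrive; the empty chunk signals end of input.
sumDigits :: Int -> Result Int
sumDigits acc = Partial step
  where
    step "" = Done acc
    step cs
      | all (`elem` ['0'..'9']) cs =
          sumDigits (acc + sum [fromEnum c - fromEnum '0' | c <- cs])
      | otherwise = Fail "not a digit"

-- Drive a Result with a list of chunks.
feed :: Result a -> [String] -> Result a
feed (Partial k) (c:cs) = feed (k c) cs
feed r _                = r

main :: IO ()
main = case feed (sumDigits 0) ["12", "34", ""] of
  Done n -> print n   -- prints 10
  _      -> putStrLn "incomplete or failed"
```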
Re: [Haskell-cafe] RFC: Changes to Travis CI's Haskell support
On Mon, Dec 3, 2012 at 1:04 AM, Simon Hengel s...@typeful.net wrote: I think the right thing to do is:

    install:
      - cabal install --only-dependencies --enable-tests
    script:
      - cabal configure --enable-tests
      - cabal build
      - cabal test

Please let me know if you think there are better ways to do it, or if you see any issues. This is the right thing to do. -- Johan
Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...
Hi Felix, On Thu, Nov 29, 2012 at 10:09 AM, Fixie Fixie fixie.fi...@rocketmail.com wrote: The problem seems to be connected to lazy loading, which makes my programs so slow that I really can not show them to anyone. I have tried all the tricks in the books, like !, seq, non-lazy datatypes... My advice usually goes like this:

1. Use standard, high-performance libraries (I've made a list of high-quality libraries at https://github.com/tibbe/haskell-docs/blob/master/libraries-directory.md).
2. Make your data type fields strict.
3. Unpack primitive types (e.g. Int, Word, Float, Double).
4. Reduce allocation in tight loops (e.g. avoid creating lots of intermediate lists).

I always do 1-3, but only do 4 when it's really necessary (e.g. in the inner loop of some machine learning algorithm). Rarely are performance issues due to lacking bang patterns on functions (although there are cases, e.g. when writing recursive functions with accumulators, where you need one). I was poking around to see if this had changed, then I ran into this forum post: http://stackoverflow.com/questions/9409634/is-indexing-of-data-vector-unboxed-mutable-mvector-really-this-slow The last solution was a Haskell program which was in the 3x range to C, which I think is ok. This was in the days of GHC 7.0. I then tried to compile the programs myself (GHC 7.4.1), but found that now the C program was more than 100x faster. The GHC code was compiled with both -O2 and -O3, giving only small differences on my 64-bit Linux box. So it seems something has changed - and even small examples are still not safe when it comes to the lazy monster. It reminds me of some code I read a couple of years ago where one of the Simons actually fired off a new thread, to make sure a variable was realized. Note that the issues in the post are not due to laziness (i.e. there are no space leaks), but due to the code being more polymorphic than the C code, causing extra allocation and indirection.
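Points 2-4 of the advice above look like this in code; the Point type is a made-up example, not from the thread:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Advice 2 and 3: strict, unpacked fields are stored as two raw
-- Doubles instead of two pointers to possibly-unevaluated thunks.
data Point = Point
  { px :: {-# UNPACK #-} !Double
  , py :: {-# UNPACK #-} !Double
  }

-- Advice 4: a tight loop with a strict accumulator rather than
-- building an intermediate list (e.g. via map and sum).
sumX :: [Point] -> Double
sumX = go 0
  where
    go !acc []     = acc
    go !acc (p:ps) = go (acc + px p) ps

main :: IO ()
main = print (sumX [Point 1 2, Point 3 4])  -- prints 4.0
```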
A sad thing, since I am more than willing to go for Haskell if it proves to be usable. If anyone can see what is wrong with the code (there are two Haskell versions on the page, I have tried the last and fastest one) it would also be interesting. What is your experience, dear haskellers? To me it seems this beautiful language is useless without a better lazy/eager analyzer. It's definitely possible to write fast Haskell code (as some Haskell programmers manage to do so consistently), but I appreciate that it's harder than it should be. In my opinion the major thing missing is a good text on how to write fast Haskell code and some tweaks to the compiler (e.g. unbox strict primitive fields like Int by default). Hope this helps. Cheers, Johan
Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...
Ack, it seems like you're running into one of these bugs (all now fixed, but I don't know in which GHC version): http://hackage.haskell.org/trac/ghc/search?q=doubleFromInteger ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...
On Thu, Nov 29, 2012 at 1:00 PM, Fixie Fixie fixie.fi...@rocketmail.com wrote: The program seems to take around 6 seconds on my Linux box, while the C version goes for 0.06 seconds. That is really some regression bug :-) Anyone with a more recent version than 7.4.1? On 7.4.2:

    $ time ./c_test
    ...
    real    0m0.145s
    user    0m0.040s
    sys     0m0.003s

    $ time ./Test
    ...
    real    0m0.234s
    user    0m0.220s
    sys     0m0.006s

Both compiled with -O2.
Re: [Haskell-cafe] To my boss: The code is cool, but it is about 100 times slower than the old one...
On Thu, Nov 29, 2012 at 1:23 PM, Fixie Fixie fixie.fi...@rocketmail.com wrote: That's really an argument for upgrading to 7.4.2 :-) Another reason for doing things with haskell is this mailing list. FYI I'm still looking into this issue as I'm not 100% happy with the code GHC generates. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Vedr: To my boss: The code is cool, but it is about 100 times slower than the old one...
On Thu, Nov 29, 2012 at 1:32 PM, Daniel Fischer daniel.is.fisc...@googlemail.com wrote: We have an unpleasant regression in comparison to 7.2.*, and the 7.4.* were slower than 7.6.1 is, but it's all okay here (not that it wouldn't be nice to have it faster still). Are you on a 32-bit system? This version works around the Word -> Double conversion bug and shows good performance: (Always compile with -Wall, it tells you if some arguments are defaulted to slow Integers instead of fast Ints.)

    {-# LANGUAGE CPP, BangPatterns, MagicHash #-}
    module Main (main) where

    #define VDIM 100
    #define VNUM 10

    import Control.Monad.ST
    import Data.Array.Base
    import Data.Array.ST
    import Data.Bits
    import GHC.Word
    import GHC.Exts

    prng :: Word -> Word
    prng w = w'
      where
        w1 = w `xor` (w `shiftL` 13)
        w2 = w1 `xor` (w1 `shiftR` 7)
        w' = w2 `xor` (w2 `shiftL` 17)

    type Vec s = STUArray s Int Double

    kahan :: Vec s -> Vec s -> ST s ()
    kahan s c = do
        let inner !w j
                | j < VDIM = do
                    cj <- unsafeRead c j
                    sj <- unsafeRead s j
                    let y  = word2Double w - cj
                        t  = sj + y
                        w' = prng w
                    unsafeWrite c j ((t - sj) - y)
                    unsafeWrite s j t
                    inner w' (j + 1)
                | otherwise = return ()
            outer i
                | i < VNUM  = inner (fromIntegral i) 0 >> outer (i + 1)
                | otherwise = return ()
        outer (0 :: Int)

    calc :: ST s (Vec s)
    calc = do
        s <- newArray (0, VDIM - 1) 0
        c <- newArray (0, VDIM - 1) 0
        kahan s c
        return s

    main :: IO ()
    main = print . elems $ runSTUArray calc

    word2Double :: Word -> Double
    word2Double (W# w) = D# (int2Double# (word2Int# w))

On my (64-bit) machine the Haskell and C versions are on par. -- Johan
Re: [Haskell-cafe] Vedr: To my boss: The code is cool, but it is about 100 times slower than the old one...
On Thu, Nov 29, 2012 at 1:40 PM, Johan Tibell johan.tib...@gmail.com wrote: This version works around the Word-Double conversion bug and shows good performance: I'd also like to point out that I've removed lots of bang patterns that weren't needed. This program runs fine without any bang patterns (but I've kept the one that can possibly have any performance implication at all). -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Vedr: To my boss: The code is cool, but it is about 100 times slower than the old one...
On Thu, Nov 29, 2012 at 2:02 PM, Johan Tibell johan.tib...@gmail.com wrote: On Thu, Nov 29, 2012 at 2:01 PM, Daniel Fischer daniel.is.fisc...@googlemail.com wrote: On Donnerstag, 29. November 2012, 13:40:42, Johan Tibell wrote:

    word2Double :: Word -> Double
    word2Double (W# w) = D# (int2Double# (word2Int# w))

On my (64-bit) machine the Haskell and C versions are on par. Yes, but the result is very different. Doh, I guess I didn't look at the output carefully enough. One obvious error is that the C code has one loop go from 1..n where I just naively assumed all loops go from 0..n-1. This fixes that:

    outer i | i <= VNUM  = inner (fromIntegral i) 0 >> outer (i + 1)
            | otherwise  = return ()
    outer (1 :: Int)

Perhaps the other issue is that

    word2Double (W# w) = D# (int2Double# (word2Int# w))

is possibly the wrong way and we need a word2Double#. -- Johan
Re: [Haskell-cafe] Can a GC delay TCP connection formation?
Kazu and Andreas, could this be IO manager related? On Monday, November 26, 2012, Jeff Shaw wrote: Hello, I've run into an issue that makes me think that when the GHC GC runs while a Snap or Warp HTTP server is serving connections, the GC prevents or delays TCP connections from forming. My application requires that TCP connections form within a few tens of milliseconds. I'm wondering if anyone else has run into this issue, and if there are some GC flags that could help. I've tried a few, such as -H and -c, and haven't found anything to help. I'm using GHC 7.4.1. Thanks, Jeff
Re: [Haskell-cafe] Common function names for '(.) . (.)', '(.) . (.) . (.)' ...?
On Wed, Nov 21, 2012 at 9:48 AM, MigMit miguelim...@yandex.ru wrote: Tits? This is not appropriate for this mailing list, please take it elsewhere. I suggest http://www.reddit.com/r/ruby
Re: [Haskell-cafe] cabal install...
On Tue, Nov 20, 2012 at 1:10 PM, Gregory Guthrie guth...@mum.edu wrote: Hmm, now when I tried to run Leksah, I get not only some broken packages (which I can avoid for my current project), but:

    command line: cannot satisfy -package-id base-4.5.1.0-7c83b96f47f23db63c42a56351dcb917:
        base-4.5.1.0-7c83b96f47f23db63c42a56351dcb917 is unusable due to
        missing or recursive dependencies:
          integer-gmp-0.4.0.0-c15e185526893c3119f809251aac8c5b
        (use -v for more information)

So I tried to install base, then re-install it, but both fail; any hints? From this email and some of the previous emails it seems that your package DB is in a pretty bad state, most likely from using --force-reinstalls. When Cabal warns you that this will break stuff it actually means it. :) My suggestion is that you

    rm -rf ~/.ghc/x86_64-linux-7.6.1  # or equivalent on your system

Then reinstall all the packages you want by listing them all at once:

    cabal install pkg1 pkg2 pkg3

By listing them all together cabal-install tries to come up with an install plan that is globally consistent for all of them. -- Johan
Re: [Haskell-cafe] Cabal failures...
On Tue, Nov 20, 2012 at 5:14 PM, Albert Y. C. Lai tre...@vex.net wrote: On 12-11-20 05:37 PM, Gregory Guthrie wrote: No; the first sentence says that someone else had reported that testing on Windows was hard to do because of (a perceived) lack of access to Windows by Haskell developers... The implication is that Haskell developers (only/mainly) use *nix. I commented that if true this lack of Windows testing could limit the availability of Haskell to the largest market share of users. Clearly, since 90% of computers have Windows, it should be trivial to find one to test on, if a programmer wants to. Surely every programmer is surrounded by Windows-using family and friends? (Perhaps to the programmer's dismay, too, because the perpetual "I've got a virus again, can you help?" is so annoying?) We are not talking about BeOS. Therefore, if programmers do not test on Windows, it is because they do not want to. This logic is flawed. More than 90% of computers having Windows doesn't imply that 90% of all computers in a given household run Windows. What's the probability that your household has a Windows computer if you're a programmer who doesn't live with your parents? What if that programmer is an open source contributor? Surely not 90%.
Re: [Haskell-cafe] Cabal failures...
On Tue, Nov 20, 2012 at 5:34 PM, Albert Y. C. Lai tre...@vex.net wrote: This counter-argument is flawed. Why limit oneself to one's own household? (Garage? Basement?) Get out more! Visit a friend. Talk to an internet cafe owner for a special deal to run one's own programs. Rent virtual machine time in the cloud. There are many creative, flexible, low-cost possibilities. If one wants to. Clearly these different approaches have different costs. Fixing a bug from my couch and asking some stranger at a cafe if I can install msys are quite different things.
Re: [Haskell-cafe] Cabal failures...
Hi Greg, On Mon, Nov 19, 2012 at 1:25 PM, Gregory Guthrie guth...@mum.edu wrote: I follow the Cabal-messes threads with some interest, since that is the hardest area for me since starting to use Haskell. Probably 40-60% of all package installs fail for some mysterious reason, with threats that trying to fix them will break more things, which generally is true. :-) We're working on it. Be brave, things are going to get better! I am not an expert in the area, but I wonder how/why this is different than other package managers, like apt in Linux; I have never had any problems with it, and I would think that their dependencies are of at least similar complexities. The Linux package managers solve a different problem. They let you install a set of packages that have been manually curated and are known to work together (i.e. all version dependencies are fixed) while cabal does version resolution on packages that might not ever have been tried together. If you install Haskell packages via your distro's package manager I assume they will always install cleanly. The problem is that people want the latest bleeding edge of packages, which haven't made it into the distros yet, and hence they get to experience some of the pains associated with being on the bleeding edge. Being on Windows also makes things harder, as most developers don't have a Windows box to test their stuff on. In any case: trying to do a cabal update I was told to try to update cabal-install, which I think means actually updating cabal (since I actually run installs via cabal install...), but that fails with the message below, and I don't know how to proceed. cabal-install is the package that includes the cabal executable. Cabal (with a capital C) is the library that cabal-install uses. The naming is unfortunate but hard to change at this point.
To update cabal-install you do:

    $ cabal update
    $ cabal install cabal-install

Make sure that the place that the cabal binary gets installed into (which is printed at the end of the install) is on your PATH.

    Linking C:\Users\guthrie\AppData\Local\Temp\Cabal-1.16.0.3-13880\Cabal-1.16.0.3\dist\setup\setup.exe ...
    Configuring Cabal-1.16.0.3...
    Warning: This package indirectly depends on multiple versions of the same
    package. This is highly likely to cause a compile failure.

This is a sure sign that things are not going to work well. Could you include the output of

    cabal install -v cabal-install

please. The output here is not enough to tell me what's going on. Please also include the output of

    cabal --version
    ghc --version

Are you using the Haskell Platform, and if so, which version? -- Johan
Re: [Haskell-cafe] Cabal failures...
On Mon, Nov 19, 2012 at 2:55 PM, Greg Fitzgerald gari...@gmail.com wrote: cabal install -v cabal-install Not sure if you're running into this one, but a configuration that wasn't working for me: 1) Install Haskell Platform 2) Install GHC 7.6.1 3) cabal install cabal-install As I recall, the error had something to do with a Cabal-generated 'Paths' file assuming the Prelude exported 'catch'. It was affecting a bunch of other packages too, which forced me to upgrade cabal-install. To get things working, I had to boot GHC 7.6 from my system PATH, upgrade cabal-install using GHC 7.4, and then put 7.6 back in the system path. After doing that, everything has worked well with GHC 7.6. The issue is that cabal-install-1.16.0.1 is broken on Windows. We have a new, fixed cabal-install-1.16.0.2 out, but if you were unlucky enough to install the broken one you need to delete that binary and install cabal-install again (either by using the bootstrap.sh script in the cabal-install repo or by some other means). ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Automation of external library installation in haskell package.
Hi, On Sun, Nov 18, 2012 at 11:51 AM, Maksymilian Owsianny maksymilian.owsia...@gmail.com wrote: However, the problem with hackage not being able to build the package, and therefore generate documentation, remains. So conversely my question would be if there is any way to get around this? In the short term I'm afraid not. The plan is to eventually let users upload the documentation (after building it on their own machines). -- Johan ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] How to determine correct dependency versions for a library?
On Wed, Nov 14, 2012 at 1:01 PM, Tobias Müller trop...@bluewin.ch wrote: Clark Gaebel cgae...@uwaterloo.ca wrote: To prevent this, I think the PVP should specify that if dependencies get a major version bump, the package itself should bump its major version (preferably the B field). No, it has nothing to do with major/minor version bumps. It's just that if you underspecify your dependencies, they may become invalid at some point and you cannot correct them. Overspecified dependencies will always remain correct. This is required if you want to maintain the property that clients don't break. If A-1.0 depends on B-1.0.* and C depends on both A-1.0.* and B-1.0.*, bumping the dependency in A on B to B-2.0.* without bumping the major version number of A will cause C to fail to compile, as it now depends on both B-1.0.* (directly) and B-2.0.* (through A-1.0). -- Johan
Re: [Haskell-cafe] How to determine correct dependency versions for a library?
On Mon, Nov 12, 2012 at 1:06 AM, Erik Hesselink hessel...@gmail.com wrote: tl;dr: Breakages without upper bounds are annoying and hard to solve for package consumers. With upper bounds, and especially with sandboxes, breakage is almost non-existent. I don't see how things break with upper bounds, at least in the presence of sandboxes. If all packages involved follow the PVP, a build that worked once will always work. Cabal 0.10 and older had problems here, but 0.14 and later will always find a solution to the dependencies if there is one (if you set max-backjumps high enough). The breakage people are talking about with regards to upper bounds is that every time a new version of a dependency comes out, packages with upper bounds can't compile with it, even if they would without the upper bound. For example, the version number of base is bumped with almost every GHC release, yet almost no packages would actually break due to the changes that caused that major version number to go up.
Re: [Haskell-cafe] Taking over ghc-core
On Saturday, November 10, 2012, Shachaf Ben-Kiki wrote: With Don Stewart's blessing (https://twitter.com/donsbot/status/267060717843279872), I'll be taking over maintainership of ghc-core, which hasn't been updated since 2010. I'll release a version with support for GHC 7.6 later today. Yay! ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] Motion to unify all the string data types
On Fri, Nov 9, 2012 at 10:22 PM, Roman Cheplyaka r...@ro-che.info wrote: * Johan Tibell johan.tib...@gmail.com [2012-11-09 19:00:04-0800] As a community we should primarily use strict ByteStrings and Texts. There are uses for the lazy variants (i.e. they are sometimes more efficient), but in general the strict versions should be preferred. I'm fairly surprised by this advice. I think that lazy BS/Text are a much safer default. If there's not much text it wouldn't matter anyway, but for large amounts using strict BS/Text would disable incremental producing/consuming (except when you're using some kind of an iteratee library). Can you explain your reasoning? It better communicates intent. A lazy byte string, for example, can be used for two separate things: * to model a stream of bytes, or * to avoid costs due to concatenating strings. By using a strict byte string you make it clear that you're not trying to do the former (at some potential cost due to the latter). When you want to do the former it should be clear to the consumer that he/she had better consume the string in an incremental manner so as to preserve laziness and avoid space leaks (by forcing the whole string). -- Johan
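The two roles can be put side by side in code; this sketch uses the bytestring package, and the functions are illustrative examples rather than any prescribed API:

```haskell
import qualified Data.ByteString.Char8 as S
import qualified Data.ByteString.Lazy.Char8 as L

-- Strict: the whole message is here; consuming all of it is expected.
checksum :: S.ByteString -> Int
checksum = S.foldl' (\acc c -> acc + fromEnum c) 0

-- Lazy: a stream; a consumer that stops early never forces the
-- remaining chunks into memory.
firstLine :: L.ByteString -> L.ByteString
firstLine = L.takeWhile (/= '\n')

main :: IO ()
main = do
  print (checksum (S.pack "hi"))            -- 104 + 105 = 209
  L.putStrLn (firstLine (L.pack "one\ntwo"))
```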
Re: [Haskell-cafe] Motion to unify all the string data types
Hi Andrew, On Fri, Nov 9, 2012 at 6:15 PM, Andrew Pennebaker andrew.penneba...@gmail.com wrote: Frequently when I'm coding in Haskell, the crux of my problem is converting between all the stupid string formats. You've got String, ByteString, Lazy ByteString, Text, [Word], and on and on... I have to constantly look up how to convert between them, and the overloaded strings GHC directive doesn't work, and sometimes ByteString.unpack doesn't work, because it expects [Word8], not [Char]. AAAH!!! Haskell is a wonderful playground for experimentation. I've started to notice that many Hackage libraries are simply instances of typeclasses designed a while ago, and their underlying implementations are free to play around with various optimizations... But they ideally all expose the same interface through typeclasses. Can we do the same with String? Can we pick a good compromise of lazy vs strict, flexible vs fast, and all use the same data structure? My vote is for type String = [Char], but I'm willing to switch to another data structure, just as long as it's consistently used. tl;dr: Use strict Text and ByteStrings. We need at least two string types, one for byte strings and one for Unicode strings, as these are two semantically different concepts. You can see that most modern languages use two types (e.g. str and unicode in Python). For Unicode strings, String is not a good candidate; it's slow, uses a lot of memory, doesn't hide its representation [1], and finally, it encourages people to do the wrong thing from a Unicode perspective [2]. As a community we should primarily use strict ByteStrings and Texts. There are uses for the lazy variants (i.e. they are sometimes more efficient), but in general the strict versions should be preferred. Choosing to use these two types can sometimes be a bit frustrating, as lots of code (e.g. the base package) uses Strings. But if we don't start using them the pain will never end.
One of the main pain points is that the I/O layer uses Strings, which is both inconvenient and wrong (e.g. a socket returns bytes, not Unicode code points, yet the recv function returns a String). We really need to create a more sane I/O layer. If you use ByteString and Text, you shouldn't see calls to pack/unpack in your code (except if you want to interact with legacy code), as the correct way to go between the two is via the encode and decode functions in the text package. As for type classes, I don't think we use them enough. Perhaps because Haskell wasn't developed as an engineering language, some good software engineering principles (code against an interface, not a concrete implementation) aren't used in our base libraries. One specific example is the lack of a sequence abstraction/type class that all the string, list, and vector types could implement. Right now all these types try to implement a compatible interface (i.e. the traditional list interface), without a language mechanism to express that this is what they do.

1. If String had been designed as an abstract type, we could simply have switched its implementation for a more efficient one and we wouldn't have had to create a new Text type.

2. By having the primary interface of a Unicode data type be a sequence, we encourage users to work on strings element-wise, which can lead to errors as Unicode code points don't correspond well to the human concept of a character (for example, the Swedish ä character can be represented using either one or two code points). A sequence view is sometimes useful if you're implementing more high-level transformations, but often you should use functions that operate on the whole string, such as toUpper :: Text -> Text.

Cheers, Johan
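The encode/decode crossing, and the code-point subtlety with ä, can both be seen in a few lines using the text package:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import qualified Data.ByteString as B
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- Bytes on the wire, Text inside the program; encodeUtf8 and
-- decodeUtf8 from Data.Text.Encoding are the crossing points.
main :: IO ()
main = do
  let t  = "ä" :: T.Text     -- here a single code point, U+00E4
      bs = TE.encodeUtf8 t   -- two bytes in UTF-8
  print (T.length t)             -- 1
  print (B.length bs)            -- 2
  print (TE.decodeUtf8 bs == t)  -- True
```

The same visible character could also be written as two code points (a plus a combining diaeresis), which is exactly why element-wise processing of Unicode text is error-prone.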
Re: [Haskell-cafe] Does anyone know where George Pollard is?
On Wed, Nov 7, 2012 at 9:03 PM, Myles C. Maxfield myles.maxfi...@gmail.com wrote: Does anyone know where he is? If not, is there an accepted practice to resolve this situation? Should I upload my own 'idna2' package? Generally we try to contact the maintainer (and give him/her enough time to reply, in case he/she is e.g. on vacation). After that someone can announce their intention to take over maintenance of the package. -- Johan
Re: [Haskell-cafe] Defining a Strict language pragma
On Mon, Nov 5, 2012 at 3:28 PM, Iustin Pop iu...@k1024.org wrote: Did you mean here it's still possible to define _lazy_ arguments? The duality of !/~ makes sense, indeed. Yes, it would be nice to still make arguments explicitly lazy, using ~. ___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
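This thread predates it, but the design discussed here is essentially what later shipped in GHC 8.0 as the Strict extension, where ~ does recover per-argument laziness; a minimal sketch:

```haskell
{-# LANGUAGE Strict #-}

-- Under -XStrict every pattern is strict unless prefixed with ~,
-- which opts the argument back into ordinary lazy behaviour.
constLazy :: a -> b -> a
constLazy x ~y = x

main :: IO ()
main = print (constLazy (42 :: Int) (undefined :: Int))
```

Because the second argument is marked lazy, undefined is never forced and the program prints 42; dropping the ~ makes it crash.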
Re: [Haskell-cafe] [ANNOUNCE] hashable-generics
On Sun, Nov 4, 2012 at 8:35 AM, Clark Gaebel cgae...@uwaterloo.ca wrote: @dag: I would love for this to be merged into Data.Hashable, and I think it would make a lot of people's lives easier, and prevent them from writing bad hash functions accidentally. Couldn't we do it using GHC's default implementations based on signatures feature, so we don't have to expose any new things in the API? We used that in unordered-containers like so:

    #ifdef GENERICS
        default parseRecord :: (Generic a, GFromRecord (Rep a)) => Record -> Parser a
        parseRecord r = to <$> gparseRecord r
    #endif

-- Johan
[Haskell-cafe] ANN: Cabal-1.16.0.2 and cabal-install-1.16.0.1
Hi, On behalf of the cabal contributors, I'm proud to announce bugfix releases of Cabal and cabal-install. Here's a complete list of changes since the last release:

Since Cabal-1.16.0.1:

* Bump Cabal version number to 1.16.0.2
* Fixed warnings in the generated Paths module. The warnings are generated by the flag '-fwarn-missing-import-lists'.

Since cabal-install-1.16.0:

* Have bootstrap.sh use Cabal-1.16.0.2
* Fix installing from custom folder on Linux (#1058)
* Change bootstrap.sh to require Cabal >= 1.16 && < 1.18
* Bump cabal-install version number to 1.16.0.1
* Bump network dependency in bootstrap.sh to 2.3.1.1
* Fix compilation error
* Disable setting the "jobs: $nprocs" line in the default ~/.cabal config
* Fix building cabal-install with ghc-6.12 and older

I expect this to be the last release from the 1.16 branch. I intend to make another in about 3 months, or a bit sooner if the sandboxing work is ready earlier. Cheers, Johan
[Haskell-cafe] cabal-install-1.16.0.2
Hi all, I've created bug fix release candidates for Cabal and cabal-install to address the bugs found after the release. It would be great if everyone could take some time to try them out, especially those who had issues with the previous releases. To install the release candidates, run:

    cabal install http://johantibell.com/files/Cabal-1.16.0.2.tar.gz \
                  http://johantibell.com/files/cabal-install-1.16.0.1.tar.gz

Unless there are any issues, we'll make a release in the next few days. Cheers, Johan
Re: [Haskell-cafe] cabal-install-1.16.0.2
On Mon, Oct 15, 2012 at 7:16 PM, Johan Tibell johan.tib...@gmail.com wrote: Hi all, I've created bug fix release candidates for Cabal and cabal-install to address the bugs found after the release. Here's the list of fixed bugs:

Fixed since cabal-install-1.16.0:

* Fix installing from custom folder on Linux (#1058)
* Change bootstrap.sh to require Cabal >= 1.16 && < 1.18
* Bump cabal-install version number to 1.16.0.1
* Bump network dependency in bootstrap.sh to 2.3.1.1
* Fix compilation error
* Disable setting the "jobs: $nprocs" line in the default ~/.cabal config
* Fix building cabal-install with ghc-6.12 and older

Fixed since Cabal-1.16.0.1:

* Bump Cabal version number to 1.16.0.2
* Fixed warnings in the generated Paths module. The warnings are generated by the flag '-fwarn-missing-import-lists'.
Re: [Haskell-cafe] problem with cabal-install-1.16.0 and Cabal-1.16.0.1 on Haskell Platform for OS X
Hi,

Thanks for the detailed problem description. I've CCed Mark Lentczner, who designed the directory layout for the Haskell Platform on OS X. Hopefully he'll be able to tell you what to do.

On Tue, Oct 9, 2012 at 8:31 AM, Derrell Piper d...@electric-loft.org wrote:

Hi,

New Haskell user here. I have the latest Haskell Platform 2012.2.0.0 for Mac OS X, 64-bit, installed. I have not installed any additional packages. Believe me, the Prelude hurts enough. Anyways, seeing the big update to cabal, I decided to install it and did a:

    cabal install cabal-install-1.16.0 Cabal-1.16.0.1

...per the announcement. The installation seemingly went well (log attached) but I'm left with a symlink farm that's still pointing at the old version. NB: I did the installation in my own account (ddp), not via sudo, which looks like what was probably expected given the ownerships I do see. Here's what I'm left with:

    % which cabal
    /usr/bin/cabal
    % ls -la /usr/bin/cabal
    lrwxr-xr-x 1 root wheel 26 Oct 8 16:42 /usr/bin/cabal -> /Library/Haskell/bin/cabal
    % ls -la /Library/Haskell/bin/cabal*
    lrwxr-xr-x 1 ddp wheel 10 Oct 8 16:42 /Library/Haskell/bin/cabal -> cabal.wrap
    lrwxr-xr-x 1 ddp wheel 37 Oct 8 16:42 /Library/Haskell/bin/cabal.real -> ../lib/cabal-install-0.14.0/bin/cabal
    -rwxr-xr-x 1 root admin 4197 May 27 12:37 /Library/Haskell/bin/cabal.wrap

I find that it actually installed itself here:

    % ls -la ~/Library/Haskell/bin/cabal*
    lrwxr-xr-x 1 ddp staff 47 Oct 8 20:45 /Users/ddp/Library/Haskell/bin/cabal -> ../ghc-7.4.1/lib/cabal-install-1.16.0/bin/cabal

So I guess my questions are:

1) Did I miss some step along the way?
2) What's the correct way to fix this? Just change the cabal.real symlink to point to where it actually is?
3) Which, if any, directories should I have in my PATH in general: /Library/Haskell/bin, ~/Library/Haskell/bin, neither, or both?
Thanks,
Derrell

Installation log:

    fluffy512% cabal --version
    cabal-install version 0.14.0
    using version 1.14.0 of the Cabal library
    fluffy513% cabal update
    Downloading the latest package list from hackage.haskell.org
    Note: there is a new version of cabal-install available.
    To upgrade, run: cabal install cabal-install
    fluffy514% cabal install cabal-install-1.16.0 Cabal-1.16.0.1
    Resolving dependencies...
    [ 1 of 65] Compiling Distribution.Compat.Exception ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Compat/Exception.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Compat/Exception.o )
    [ 2 of 65] Compiling Distribution.Compat.TempFile ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Compat/TempFile.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Compat/TempFile.o )
    [ 3 of 65] Compiling Distribution.Compat.CopyFile ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Compat/CopyFile.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Compat/CopyFile.o )
    [ 4 of 65] Compiling Distribution.GetOpt ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/GetOpt.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/GetOpt.o )
    [ 5 of 65] Compiling Distribution.Compat.ReadP ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Compat/ReadP.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Compat/ReadP.o )
    [ 6 of 65] Compiling Distribution.Text ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Text.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Text.o )
    [ 7 of 65] Compiling Distribution.Version ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/Version.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Distribution/Version.o )
    [ 8 of 65] Compiling Language.Haskell.Extension ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Language/Haskell/Extension.hs, /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/dist/setup/Language/Haskell/Extension.o )
    [ 9 of 65] Compiling Distribution.TestSuite ( /var/folders/sm/f2yb5sp11490bcy5k6t46v20gn/T/Cabal-1.16.0.1-48304/Cabal-1.16.0.1/Distribution/TestSuite.hs,
Re: [Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?
On Mon, Oct 8, 2012 at 3:28 AM, Christiaan Baaij christiaan.ba...@gmail.com wrote:

Hi,

I finally found another OS X Mountain Lion install and can confirm the behaviour I described earlier:

32-bit: compiled code works, interpreted code works
64-bit: compiled code works, interpreted code fails

Here's the test case:

    cabal install gloss --flags="-GLUT GLFW"
    cabal unpack gloss-examples
    cd gloss-examples-1.7.6.2/picture/GameEvent
    ghci -fno-ghci-sandbox Main.hs
    main

I get the following crash report: http://pastebin.com/jZjfFtm7

The weird thing is the following: when I run 'ghci' from inside 'gdb' (to find the origin of the segfault), everything works fine:

ghci: segfault
ghci from gdb: everything works

I have no idea what's going on, so if anyone has any pointers on how to make sure ghci behaves the same in gdb, please let me know.

Could you please file a bug report at: http://hackage.haskell.org/trac/ghc/

Thanks!
Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
Hi,

I'll make a bugfix release for cabal-install and Cabal in a few days to include fixes for the issues people have found so far. If you had a problem related to the latest release, please post it here so I can make sure we include a fix for it. If you've already reported it elsewhere, please bring it up here anyway so I don't miss it.

-- Johan
Re: [Haskell-cafe] Panic loading network on windows (GHC 7.6.1)
On Fri, Oct 5, 2012 at 5:31 PM, JP Moresmau jpmores...@gmail.com wrote:

Hello,

I've installed Cabal and cabal-install 1.16 (which required network) on a new GHC 7.6.1 install and everything went well, except that now, when building a package requiring network, I get:

    Loading package network-2.4.0.1 ...
    ghc.exe: Unknown PEi386 section name `.idata $4' (while processing: c:/ghc/ghc-7.6.1/mingw/lib\libws2_32.a)
    ghc.exe: panic! (the 'impossible' happened)
      (GHC version 7.6.1 for i386-unknown-mingw32):
        loadArchive c:/ghc/ghc-7.6.1/mingw/lib\\libws2_32.a: failed

Have I done something wrong while building network on my Windows XP machine? What can I check?

I'm not quite sure what's going on. We did test cabal-install (and thus network) on a Windows machine before the release.
Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
On Thu, Oct 4, 2012 at 7:11 PM, Tim Docker t...@dockerz.net wrote:

> Does this new release include the sandbox functions discussed in this blog post: http://blog.johantibell.com/2012/08/you-can-soon-play-in-cabal-sandbox.html ?

It doesn't. The sandbox feature still requires a little UI work. I expect to have it out by the end of the year.
[Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
On behalf of the many contributors to cabal, I'm proud to present cabal-install-1.16.0. This release contains almost a year's worth of patches. Highlights include:

* Parallel installs (cabal install -j)
* Several improvements to the dependency solver
* Lots of bugfixes

We're also simultaneously releasing Cabal-1.16.0.1, which addresses a few bugs. To install:

    cabal update
    cabal install cabal-install-1.16.0 Cabal-1.16.0.1

Complete list of changes in cabal-install-1.16.0:

* Bump cabal-install version number to 1.16.0
* Extend the unpack command for the .cabal file updating
* On install, update the .cabal file with the one from the index
* Make compatible with `network-2.4` API
* Update ZLIB_VER in bootstrap.sh for ghc-7.6 compatibility
* cabal-install.cabal: add Distribution.Client.JobControl and Distribution.Compat.Time
* Adapt bootstrap.sh to ghc-7.6 changes
* Move comment that was missed in a refactoring
* cabal-install: Adapt for GHC 7.6
* Ensure that the cabal-install logfile gets closed
* Make cabal-install build with Cabal-1.16
* Initialise the 'jobs' config file setting with the current number of CPU cores
* Update version bounds for directory
* Update bootstrap.sh to match platform 2012.2.0.0
* Relax dependency on containers
* Bump versions
* Better output for parallel install
* Fix warnings
* Remove 'tryCachedSetupExecutable'
* Redundant import
* Use a lock instead of 'JobControl 1'
* Comments, cosmetic changes
* Implement the setup executable cache
* Add the missing JobControl module
* Fix missing import after merge of par build patches
* Fix impl of PackageIndex.allPackagesByName
* Drop the ghc-options: -rtsopts on cabal-install. We do not need it.
* Parallelise the install command. This is based on Mikhail Glushenkov's patches.
* InstallPlan: Add a Processing package state
* Add a '-j' flag for the 'install' command
* Add -threaded and -rtsopts to cabal-install's ghc-options
* Fix typos
* Fix warnings
* 80-col violation
* Spelling
* Fix warnings
* Extended a comment
* Force the log for the error to be printed in parallel with the complete trace
* Remove goal choice nodes after reordering is completed
* Make modular solver handle manual flags properly
* Store manual flag info in search tree
* Maintain info about manual flags in modular solver
* Fix cabal-install build
* Merge pull request #6 from pcapriotti/master
* Adapt to change in GHC package db flags
* Merge pull request #1 from iustin/master
* Add support for Apache 2.0 license to cabal-install
* Handle test and bench stanzas without dependencies properly in modular solver
* Updated repo location in cabal files
* Last-minute README changes
* Updated copyright year for Duncan
* Updated changelog
* Added deepseq to bootstrap.sh
* Handle the solver options properly in config file
* Handle the optimization option properly in config file
* Update cabal-install bootstrap.sh
* No longer treat unknown packages as an internal error in modular solver
* Minor wording change when printing install plans
* No longer pre-filter broken packages for modular solver
* For empty install plans, print the packages that are installed
* Make the reinstall check less noisy
* Disable line-wrapping for solver debug output
* Add a solver flag for shadowing of installed packages
* Add the possibility for index-disabled packages
* Choose default solver based on compiler version
* Added a comment
* Use the new --package-db flag stuff in cabal-install
* head cabal-install requires head Cabal
* Fix ticket #731
* Add brief description of PVP to cabal init generated .cabal files
* Bump versions to 1.15 and 0.15. This is the head branch; the 1.14.x and 0.14.x are in the 1.14 branch.
* init: guess at filling in deps in the build-depends: field
* init: see whether source directory 'src' exists
* init: improve prompt: enclose y/n in parens
* init: improve prompt: 'homepage' field is not for repos
* bootstrap with --global should still respect $PREFIX
* Update cabal-install bootstrap.sh package versions
* Fix 'cabal configure --enable-{tests,benchmarks}'. 'cabal configure' was not adding optional stanza constraints when checking dependencies, causing '--enable-{tests,benchmarks}' to be silently ignored.
* Added missing error message
* Don't try to run test suites where none exist
* Fixed non-exhaustive pattern matches with new InstallOutcome
* Automatically run test suites when invoked with 'cabal install --enable-tests'. Do not install if tests fail.
* Make test and bench available as user constraints
* Let --reinstall imply --force-reinstalls for targets
* Stanza support in modular solver
* Show optional stanzas when printing install plans
* Added a missing case
* Enable tests and benchmarks in cabal-install without modifications to the Cabal library
* Don't build benchmarks, even if installing benchmark dependencies
* Update types in modular dependency solver to compile with new test/benchmark dependency
Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
On Wed, Oct 3, 2012 at 6:21 PM, José Lopes jose.lo...@ist.utl.pt wrote:

> Hello, I just tried to upgrade cabal-install using an older version and it yields the following error:
>
>     Distribution/Client/JobControl.hs:63:6: Not in scope: `mask'

We tested on GHC 7.0.4, 7.4.1, and 7.6.1.

> If you need any machine/software specs let me know.

That would be great. Which version of GHC/base do you have?

P.S. I'm not sure we really want to support more than the last 3 GHC releases. People on really old GHCs can always use an older cabal-install.

-- Johan
Re: [Haskell-cafe] total Data.Map.! function
Hi,

On Wed, Oct 3, 2012 at 7:52 PM, Henning Thielemann lemm...@henning-thielemann.de wrote:

> I wondered whether there is a brilliant typing technique that makes Data.Map.! a total function. That is, is it possible to give (!) a type such that m!k expects a proof that the key k is actually present in the dictionary m? How can I provide the proof that k is in m?
>
> Same question for 'lab' (import Data.Graph.Inductive (lab)). That is, can there be a totalLab, with (totalLab gr = fromJust . lab gr), that expects a proof that the node Id is actually contained in a graph?

Perhaps by using an HList in the type of the Map?

-- Johan
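One known way to encode such a membership proof is a phantom type parameter that ties a key to the particular map it was looked up in (the idea later packaged up by the justified-containers library). Here is a minimal sketch; all of the names below are made up for illustration, not a real API:

```haskell
{-# LANGUAGE RankNTypes #-}
import qualified Data.Map as M

-- A map and a key tagged with the same phantom parameter 'ph'.
newtype PMap ph k v = PMap (M.Map k v)
newtype PKey ph k   = PKey k

-- The only way to obtain a 'PKey ph k' is to look it up in a
-- 'PMap' with the same 'ph', so holding one proves membership.
member :: Ord k => k -> PMap ph k v -> Maybe (PKey ph k)
member k (PMap m)
  | M.member k m = Just (PKey k)
  | otherwise    = Nothing

-- Total lookup: no Maybe and no runtime error possible.
(!) :: Ord k => PMap ph k v -> PKey ph k -> v
(PMap m) ! (PKey k) = m M.! k

-- The rank-2 type keeps 'ph' from escaping, so a key cannot be
-- reused with a different map (which might lack the entry).
withPMap :: M.Map k v -> (forall ph. PMap ph k v -> r) -> r
withPMap m f = f (PMap m)

main :: IO ()
main = withPMap (M.fromList [(1 :: Int, "one"), (2, "two")]) $ \m ->
  case member 2 m of
    Just k  -> putStrLn (m ! k)
    Nothing -> putStrLn "absent"
```

The proof carries no runtime data; it is erased at compile time, so the total (!) is exactly as fast as the partial one.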
Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
On Wed, Oct 3, 2012 at 9:15 PM, Austin Seipp mad@gmail.com wrote:

Just a heads up: on Ubuntu 12.04 with GHC 7.4.1 out of apt (no haskell-platform), using the bootstrap.sh script fails because the constraint in CABAL_VER_REGEXP is too lax:

    $ sh ./bootstrap.sh
    Checking installed packages for ghc-7.4.1...
    Cabal is already installed and the version is ok.
    transformers is already installed and the version is ok.
    mtl is already installed and the version is ok.
    deepseq is already installed and the version is ok.
    text is already installed and the version is ok.
    parsec is already installed and the version is ok.
    network is already installed and the version is ok.
    time is already installed and the version is ok.
    HTTP is already installed and the version is ok.
    zlib is already installed and the version is ok.
    random is already installed and the version is ok.
    cleaning...
    Linking Setup ...
    Configuring cabal-install-1.16.0...
    Setup: At least the following dependencies are missing:
    Cabal >=1.16.0 && <1.18
    Error during cabal-install bootstrap:
    Configuring the cabal-install package failed
    $ ghc-pkg list Cabal
    /var/lib/ghc/package.conf.d
        Cabal-1.14.0
    /home/a/.ghc/x86_64-linux-7.4.1/package.conf.d
    $

In bootstrap.sh, we see:

    CABAL_VER="1.16.0";CABAL_VER_REGEXP="1\.(13\.3|1[4-7]\.)" # >= 1.13.3 && < 1.18

The constraint should be updated so that it requires 1.16. It can be fixed by saying:

    CABAL_VER="1.16.0";CABAL_VER_REGEXP="1\.(1[6-7]\.)" # >= 1.16 && < 1.18

Otherwise, you can't get any new cabal-install without the platform. Or you have to get 1.14, then install 1.16 via 'cabal install'. I'll file a bug later today.

Thanks. Please make the fix against the cabal-1.16 branch on GitHub and send me a pull request.

-- Johan
Re: [Haskell-cafe] ANN: cabal-install-1.16.0 (and Cabal-1.16.0.1)
On Wed, Oct 3, 2012 at 5:42 PM, Ivan Lazar Miljenovic ivan.miljeno...@gmail.com wrote:

> On Oct 4, 2012 2:08 AM, Johan Tibell johan.tib...@gmail.com wrote:
>> On behalf of the many contributors to cabal, I'm proud to present cabal-install-1.16.0.
>
> Why the sudden change in versioning scheme (from 0.x to 1.x)?

To remove the confusion wherein the cabal-install executable and its corresponding Cabal library had different versioning schemes.
Re: [Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?
Hi,

On Wed, Sep 26, 2012 at 7:44 AM, Carter Schonwald carter.schonw...@gmail.com wrote:

> To the best of my knowledge there is absolutely no reason to use the 32-bit Haskell on OS X (aside from memory usage optimization cases, which likely do not matter to the *typical* user), and the community should probably update the recommendation to reflect this.

The source of the recommendation is the benchmark results presented here: http://mtnviewmark.wordpress.com/2011/12/07/32-bits-less-is-more/

Note that it's very common to run other GC'd languages, such as Java and Python, in 32-bit mode whenever possible. 32-bit almost halves the memory footprint and thus shortens GC pauses, in particular major ones, which are O(n) in the size of the heap. The problem of missing 32-bit C libraries might be a good reason for us to recommend 64-bit though, and leave the 32-bit recommendation to people who know what they are doing.

Cheers,
Johan
Re: [Haskell-cafe] 64-bit vs 32-bit haskell platform on Mac: misleading notice on Platform website?
Adding Mark, who's the release manager for the platform (and also the maintainer of the OS X builds).

On Wed, Sep 26, 2012 at 11:57 AM, Erik Hesselink hessel...@gmail.com wrote:

> On Wed, Sep 26, 2012 at 10:58 AM, Johan Tibell johan.tib...@gmail.com wrote:
>> The source of the recommendation are the benchmark results presented here: http://mtnviewmark.wordpress.com/2011/12/07/32-bits-less-is-more/
>> The problem of missing 32-bit C libraries might be a good reason for us to recommend 64-bit though and leave the 32-bit recommendation to people who know what they are doing.
>
> We switched to a 64-bit GHC recently for this exact reason. The 64-bit libraries are either already installed or can easily be installed through e.g. brew. For 32-bit libraries, we sometimes had to compile from source, passing all kinds of flags. The downside for us is doubling the memory usage, but that's more easily solved (with more memory). I haven't noticed the performance reduction. So in short, I think the 64-bit version should be the default recommendation on OS X.
Re: [Haskell-cafe] How to implement nested loops with tail recursion?
On Wed, Sep 19, 2012 at 7:24 PM, sdiy...@sjtu.edu.cn wrote:

>     main = do
>       let f 0 acc = return acc
>           f n acc = do
>             v <- return 1
>             f (n-1) (v+acc)
>       f 100 100 >>= print

Try this:

    main = do
      let f :: Int -> Int -> IO Int
          f 0 !acc = return acc  -- note strict accumulator
          f n acc = do
            v <- return 1
            f (n-1) (v+acc)
      f 100 100 >>= print
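For reference, the suggested version as a complete module (the BangPatterns pragma is needed for the !acc pattern; the 100/100 arguments are just the ones from the original message):

```haskell
{-# LANGUAGE BangPatterns #-}

main :: IO ()
main = do
  let f :: Int -> Int -> IO Int
      f 0 !acc = return acc          -- strict accumulator: no thunk chain
      f n acc  = do
        v <- return 1
        f (n - 1) (v + acc)
  f 100 100 >>= print                -- adds 1 a hundred times to 100; prints 200
```

The bang forces the accumulator at each step, so the loop runs in constant space instead of building up a chain of (v + acc) thunks.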
Re: [Haskell-cafe] How to implement nested loops with tail recursion?
On Wed, Sep 19, 2012 at 8:00 PM, sdiy...@sjtu.edu.cn wrote:

> So how do I force IO actions whose results are discarded (including IO ()) to be strict?

In your particular case it looks like you want Data.IORef.modifyIORef'. If your version of GHC doesn't include it, you can write it like so:

    -- | Strict version of 'modifyIORef'
    modifyIORef' :: IORef a -> (a -> a) -> IO ()
    modifyIORef' ref f = do
        x <- readIORef ref
        let x' = f x
        x' `seq` writeIORef ref x'

-- Johan
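A small usage sketch. The helper simply mirrors the definition above, renamed here to avoid clashing with the modifyIORef' that newer versions of base already export:

```haskell
import Data.IORef

-- Same as the strict modifyIORef above, renamed to avoid a clash
-- with the modifyIORef' that ships with newer versions of base.
strictModify :: IORef a -> (a -> a) -> IO ()
strictModify ref f = do
  x <- readIORef ref
  let x' = f x
  x' `seq` writeIORef ref x'

main :: IO ()
main = do
  ref <- newIORef (0 :: Int)
  -- Each new value is forced before being written back, so no chain
  -- of (+ i) thunks accumulates inside the IORef.
  mapM_ (\i -> strictModify ref (+ i)) [1 .. 1000]
  readIORef ref >>= print  -- prints 500500
```

With the plain (lazy) modifyIORef, the final readIORef would instead force a thousand stacked additions at once.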
Re: [Haskell-cafe] Destructive updates to plain ADTs
On Sun, Sep 9, 2012 at 1:46 AM, Milan Straka f...@ucw.cz wrote:

> Hi all,
>
> is there any way to perform a destructive update on a plain ADT? Imagine I have a simple
>
>     data Tree a = Nil | Node a (Tree a) (Tree a)
>
> I would like to be able to modify the right subtree of an existing tree. I can do that, for example, when using IORefs, by changing the datatype to
>
>     data Tree a = Nil | Node a (IORef (Tree a)) (IORef (Tree a))
>
> and using unsafePerformIO + writeIORef. But the IORefs cause additional complexity when working with the data type. At the moment I am interested in any GHC solution, be it non-portable or version specific. I would just like to run some benchmarks and see the results.
>
> Cheers,
> Milan

You can do it if you refer to the children using Array#s. That's what I do in unordered-containers to implement a more efficient fromList. For arbitrary field types I don't think there's a way (although it would be useful when birthing persistent data structures).

-- Johan
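For reference, the IORef variant from the question looks like this in full; setRight and the example values are mine, added just to make the sketch runnable:

```haskell
import Data.IORef

-- The IORef-based tree from the question: the children live in
-- mutable cells, so a subtree can be replaced in place.
data Tree a = Nil | Node a (IORef (Tree a)) (IORef (Tree a))

-- Destructively replace the right subtree of a node.
setRight :: Tree a -> Tree a -> IO ()
setRight (Node _ _ r) new = writeIORef r new
setRight Nil          _   = return ()

main :: IO ()
main = do
  t     <- Node (1 :: Int) <$> newIORef Nil <*> newIORef Nil
  child <- Node 2          <$> newIORef Nil <*> newIORef Nil
  setRight t child            -- mutate t's right subtree in place
  case t of
    Node _ _ r -> do
      sub <- readIORef r
      case sub of
        Node v _ _ -> print v  -- prints 2
        Nil        -> putStrLn "empty"
    Nil -> putStrLn "empty"
```

This shows the extra plumbing Milan mentions: every traversal now lives in IO, which is exactly the complexity the plain-ADT version avoids.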
Re: [Haskell-cafe] Destructive updates to plain ADTs
On Sun, Sep 9, 2012 at 2:19 AM, MigMit miguelim...@yandex.ru wrote:

> Why modify it instead of creating a new one and letting the previous tree get garbage collected?

You can avoid a bunch of copying and allocation by modifying the nodes in place. See http://blog.johantibell.com/2012/03/improvements-to-hashmap-and-hashset.html for some numbers.

-- Johan
Re: [Haskell-cafe] performance issues with popCount
On Fri, Sep 7, 2012 at 4:54 AM, Nicolas Trangez nico...@incubaid.com wrote:

> On Thu, 2012-09-06 at 12:07 -0700, Johan Tibell wrote:
>> Have a look at the popCount implementation for e.g. Int, which is written in C and called through the FFI: https://github.com/ghc/packages-ghc-prim/blob/master/cbits/popcnt.c
>
> Out of interest: isn't this compiled into the popCnt# primop (and the popcnt instruction on SSE4.2)?

It's the other way around: the popCnt# primop is compiled into either calls to these C functions or into the popcnt instruction, if -msse4.2 is given.

> I recently noticed Data.IntSet also contains a fairly basic bitcount implementation [1]. Is this kept as-is for a reason, instead of using popCount from Data.Bits?

I don't think so, except that we want to support the last 3 released versions of GHC, so we need a fallback if Data.Bits.popCount isn't defined.

-- Johan
Re: [Haskell-cafe] performance issues with popCount
Hi Harald,

On Thu, Sep 6, 2012 at 9:46 AM, Harald Bögeholz b...@ct.de wrote:

> Anyway, I tried this version
>
>     popCount :: Integer -> Int
>     popCount = go 0
>       where go c 0 = c
>             go c w = go (c+1) (w .&. (w - 1))
>
> and profiling showed that my program spent 80% of its time counting bits.

This is very much a placeholder version. I didn't spend any time optimizing the Integer implementation (the implementations for fixed-size types are quite optimal, however).

> So I thought I'd be clever and implement a table-based version like this:
>
>     popCount' :: Integer -> Int
>     popCount' = go 0
>       where go c 0 = c
>             go c w = go (c+1) (w .&. (w - 1))
>
>     popCountN = 10
>
>     popCountMask :: Integer
>     popCountMask = shift 1 popCountN - 1
>
>     popCountTable :: Array Integer Int
>     popCountTable = listArray (0, popCountMask) $ map popCount' [0 .. popCountMask]
>
>     popCount :: Integer -> Int
>     popCount 0 = 0
>     popCount x = popCountTable ! (x .&. popCountMask) + popCount (x `shiftR` popCountN)

Have a look at the popCount implementation for e.g. Int, which is written in C and called through the FFI: https://github.com/ghc/packages-ghc-prim/blob/master/cbits/popcnt.c

Perhaps you could create a binding to the GMP mpz_popcount function, as Integer is implemented using GMP already? It would make a nice patch to the Data.Bits module. Note that you'd still need a fallback for those who use integer-simple instead of integer-gmp.

If you don't want to do that, you can take this function:

    uint8 popcnt8(uint8 x) {
      return popcount_tab[(unsigned char)x];
    }

and call it repeatedly (via the FFI) for each byte in your Integer. (Use the popcount_tab I linked to above.)

-- Johan
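In pure Haskell, the byte-at-a-time table approach looks roughly like this. A sketch with made-up names, assuming a non-negative Integer (negative inputs would recurse forever):

```haskell
import Data.Array (Array, listArray, (!))
import Data.Bits ((.&.), shiftR)

-- Popcounts for every byte value, built once using the same
-- bit-clearing loop (w .&. (w - 1)) as in the thread.
byteTable :: Array Int Int
byteTable = listArray (0, 255) (map go [0 .. 255])
  where
    go :: Int -> Int
    go 0 = 0
    go w = 1 + go (w .&. (w - 1))

-- Byte-at-a-time popcount for a non-negative Integer: look up the
-- low byte in the table, then shift the rest down and recurse.
popCountInteger :: Integer -> Int
popCountInteger 0 = 0
popCountInteger x =
  byteTable ! fromIntegral (x .&. 255) + popCountInteger (x `shiftR` 8)

main :: IO ()
main = print (popCountInteger (2 ^ (100 :: Int) - 1))  -- prints 100
```

The table does at most one lookup per byte instead of one loop iteration per set bit, which is what makes it faster on dense inputs; an FFI binding to mpz_popcount would avoid the per-byte Integer shifts entirely.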
Re: [Haskell-cafe] ANNOUNCE: persistent-vector-0.1.0.1
On Wed, Aug 29, 2012 at 10:13 AM, Alberto G. Corona agocor...@gmail.com wrote:

> Where does the "persistent" part of the name come from? Can it be serialized/deserialized from persistent storage automatically or on demand?

"Persistent" unfortunately has two meanings. In functional programming it's used to mean that every operation on a data structure preserves the old version as well.

-- Johan
Re: [Haskell-cafe] Cabal Test-suites + custom preprocessors
On Sun, Aug 26, 2012 at 12:40 PM, Iain Nicol i...@thenicols.net wrote:

> Hi café,
>
> I am successfully using a custom preprocessor to build my main executable. In particular, I am using UUAGC to preprocess .ag files into .hs files. My Setup.hs file uses 'uuagcLibUserHook' from the package uuagc-cabal to do this, which in particular overrides Cabal's 'buildHook' for me.
>
> Now I am trying to add a test-suite (of type exitcode-stdio-1.0). I want the test suite to be able to import the modules which implement the main program. Unfortunately, building the test suite fails because the .ag files don't seem to be built into .hs files.
>
> Does anybody know if Cabal actually supports using custom preprocessors for building a test-suite? If so, is 'buildHook' still the right hook to override?

Try overriding the test hook. If it doesn't work, file a bug at https://github.com/haskell/cabal/issues

-- Johan
Re: [Haskell-cafe] Cabal Test-suites + custom preprocessors
On Sun, Aug 26, 2012 at 1:03 PM, Iain Nicol i...@thenicols.net wrote:

> Johan Tibell wrote:
>> Try overriding the test hook.
>
> Bleh; that's too obvious. I will give that a shot.

Well, I think it's reasonable for test to imply build, so in my mind the build hook should also be executed when building test suites.
Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends
On Wed, Aug 15, 2012 at 12:38 PM, Bryan O'Sullivan b...@serpentine.com wrote:

> I propose that the sense of the recommendation around upper bounds in the PVP be reversed: upper bounds should be specified only when there is a known problem with a new version of a depended-upon package.

This argument precisely captures my feelings on this subject. I will be removing upper bounds the next time I make releases of my packages.

-- Johan
Re: [Haskell-cafe] Platform Versioning Policy: upper bounds are not our friends
On Wed, Aug 15, 2012 at 1:02 PM, Brandon Allbery allber...@gmail.com wrote:

> So we are certain that the rounds of failures that led to their being *added* will never happen again?

It would be useful to have some examples of these. I'm not sure we had any when we wrote the policy (but Duncan would know more); rather, we reasoned our way to the current policy by saying that things can theoretically break if we don't have upper bounds, therefore we need them.

-- Johan