Re: [Haskell-cafe] newtype a Constraint?
On 12 March 2013 13:18, Roman Cheplyaka r...@ro-che.info wrote:

> Is there a way to newtype a constraint? Imagine a type class parameterised
> over constraints. What do I do if I want multiple instances for
> (essentially) the same constraint?

It would make sense to add support for this to newtype directly. I think it
would also make sense to allow newtypes over types of kind #. All that is
required is some implementation effort: I looked into doing this as part of
the constraint kinds patches, but it is a bit messy.

Max

___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] cabal install choosing an older version
On 25 January 2013 14:46, Ozgur Akgun ozgurak...@gmail.com wrote:

> The latest versions of ansi-terminal and hspec do not work together. Cabal
> picks the latest ansi-terminal (0.6) first, then the latest hspec that
> doesn't conflict with this choice is 0.3.0.

If this happens because the dependency bounds of ansi-terminal are too tight
then please send me a patch.

Cheers, Max
Re: [Haskell-cafe] Hackage feature request: E-mail author when a package breaks
On 2 November 2011 01:08, Diego Souza dso...@bitforest.org wrote:

> The idea is simple: there are many different platforms that would be too
> expensive for one to support. So they ask the community for help, and then
> distribute the load amongst the perl community.

Duncan and co have been working towards something similar for a while in the
form of build reports. Cabal can submit a report of whether a build
failed/succeeded, and can opt to send a full log as well. The new Hackage
server is capable of collecting these reports and taking action based on
them. For example, you could set up the Hackage server to detect patterns in
these build reports, such as "80% of builds with GHC 7 are OK, but 100% of
builds with GHC 7.2 fail".

So far the infrastructure is there for creating and collecting reports. I'm
not sure whether reporting is turned on by *default* in Cabal at the moment,
which is something we might want to do. One thing that I'm certain of is
that there are no analyses that try to find interesting patterns in the
reports. If anyone is interested in the issue, that might be a good place to
contribute and move things forward.

Max
Re: [Haskell-cafe] Problem with TemplateHaskell
On 2 November 2011 07:42, Magicloud Magiclouds magicloud.magiclo...@gmail.com wrote:

> How to avoid the name changing?

Maybe you should use nameBase rather than show?

Max
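A minimal sketch of the difference between show and nameBase on a Template Haskell Name (the use of 'id here is just an illustrative name, not from the original thread):

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH (nameBase)

main :: IO ()
main = do
  -- show on a quoted global Name keeps the module qualification,
  -- which is where the surprising "name changing" comes from
  putStrLn (show 'id)
  -- nameBase drops the qualification, leaving just the base name
  putStrLn (nameBase 'id)
```

Running this prints something like GHC.Base.id followed by id (the exact module prefix depends on the GHC version).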
Re: [Haskell-cafe] Hackage feature request: E-mail author when a package breaks
On 1 November 2011 09:00, Ketil Malde ke...@malde.org wrote:

>> This is where it stranded the last time, IIRC.
>
> That sentiment makes me a bit uneasy; so you are the official maintainer
> of a package on Hackage, but you do not want to hear about it when it fails
> to compile?

Don't forget that some packages fail to compile on Hackage even though they
work fine, because e.g. they depend on a third-party C library that is not
installed, or depend on some other package that Hackage cannot build.

Max
Re: [Haskell-cafe] Hackage feature request: E-mail author when a package breaks
On 1 November 2011 10:14, Ketil Malde ke...@malde.org wrote:

> So, I'd *love* to get an email when my packages fail to build, but I will
> accept that other people have a more sensitive relationship with their
> inbox. (I assume that the people who raise this objection - Max and
> Yitzchak - belong in this category?)

Don't get me wrong, I personally would like such a notification service. I'm
just making a case for making it somehow opt-out, perhaps at a per-package
granularity.

I already use the packdeps service to find out when I should relax my
package's version bounds. Packdeps delivers this information to me via RSS,
which I think is a great solution - I don't feel under any pressure to read
it, but the information is there if I want to have a look.

So ideally what I would like from Hackage 2.0 is an RSS feed that includes
build failure messages, packdeps-like information and perhaps other stuff --
notification of automated testsuite failures, milestones reached ("Your
edit-distance package has been downloaded 1000 times! Congratulations! :-)")
and new reviews/comments on my packages (if Hackage ever gets that feature).

Max
Re: [Haskell-cafe] is Haskell missing a non-instantiating polymorphic case?
On 23 October 2011 06:48, Adam Megacz meg...@cs.berkeley.edu wrote:

> The title might be a bit more provocative than necessary. I'm starting to
> suspect that there are very useful aspects of the parametricity of System
> F(C) which can't be taken advantage of by Haskell in its current state. To
> put it briefly, case-matching on a value of type (forall n . T n) forces
> one to instantiate the n, even though the branch taken within the case
> cannot depend on n (parametricity).

I wonder if you can write eta-expansion for a product type containing an
existential given your extension? If I have:

    data Id a = Id a

I can currently write:

    eta x = Id (case x of Id x -> x)

And I have that eta x != _|_ for all x. But if I have:

    data Exists = forall a. Exists a

Then I can't write the equivalent eta-expansion:

    eta x = Exists (case x of Exists x -> x)

The closest I can get is:

    eta x = Exists (case x of Exists x -> unsafeCoerce x :: ())

I'm not sure if you can do this with your extension, but it smells
plausible.

Max
Re: [Haskell-cafe] About the ConstraintKinds extension
On 18 October 2011 02:17, bob zhang bobzhang1...@gmail.com wrote:

> But I found a problem which I thought would be made better, plz correct me
> if I am wrong

For those who only subscribe to Haskell-Cafe: Bob posted a very similar
thread to ghc-users, which I replied to here with a suggestion for how we
could relax the superclass-cycle check:

http://thread.gmane.org/gmane.comp.lang.haskell.glasgow.user/20828/focus=20829

Max
Re: [Haskell-cafe] SPECIALIZE in the presence of constraints
On 26 September 2011 01:37, Nicu Ionita nicu.ion...@acons.at wrote:

> 1. how can the compiler (here ghc) know which function to expose as the
> correct generic search function? There must be two search functions
> generated, one generic and its specialization.

Yes, exactly. If you have:

    {-# SPECIALISE f :: Int -> Int #-}
    f :: Num a => a -> a
    f = ...

Then GHC basically generates:

    f_spec :: Int -> Int
    f_spec = f

    f :: Num a => a -> a
    f = ...

    {-# RULES "f_spec" f = f_spec #-}

> Does the module export both and later the linker chooses the correct one,
> when the client module is known?

The RULES mechanism chooses the correct one: when f applied to the
specialised type arguments is seen, the generated RULE rewrites it to
f_spec.

> 2. how can I know that the specialization is really used? When I have
> constraints, will the specializations be generated in the assumption that
> the constraint is matched? When will be this match checked?

The specialisation will be used if GHC can see that the generic function is
applied to the correct type arguments. So for example a call to f inside
another polymorphic function g (say g contains a use of f like: g = ... (f
:: Int -> Int) ...) won't get specialised unless g is itself specialised or
inlined.

> My problem is that, with or without specializations, the performance of
> the code is the same - so I doubt the specializations are used.

GHC tells you which RULEs fired if you use -ddump-simpl or ghc-core. That
might help you diagnose it, since if you see a rule fired for the
specialisation then your code is probably using it. Failing that, inspecting
the Core output itself is always useful.

Max
Re: [Haskell-cafe] Smarter do notation
On 5 September 2011 02:38, Sebastian Fischer fisc...@nii.ac.jp wrote:

> These are important questions. I think there is a trade-off between
> supporting many cases and having a simple desugaring. We should find a
> sweet-spot where the desugaring is reasonably simple and covers most
> idiomatic cases.

I have proposed a desugaring (in executable form) at
https://gist.github.com/1194308.

My desugaring aims for a slightly different design that does not try to
detect return, and instead treats the use of <$>, <*> and liftA2 purely as
an optimisation - so any computation using do still generates a Monad
constraint, but it may be desugared in a more efficient way than it is
currently, by using the Applicative combinators.

(If you do want to support the type checker only generating requests for an
Applicative constraint, you could just insist that user code writes pure
instead of return, in which case this would be quite easy to implement.)

There are still some interesting cases in my proposal. For example, if you
have my second example:

    do x <- computation1
       y <- computation2
       z <- computation3 y
       computation4 x

You might reasonably reassociate computation2 and computation3 together and
desugar this to:

    liftA2 (,) computation1 (computation2 >>= \y -> computation3 y)
        >>= \(x, _z) -> computation4 x

But currently I desugar to:

    liftA2 (,) computation1 computation2
        >>= \(x, y) -> computation3 y *> computation4 x

It wouldn't be too hard (and perhaps a nice exercise) to modify the
desugaring to do this reassociation.

Max
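For readers who want to convince themselves that an Applicative-style desugaring agrees with the ordinary do-block, here is a small sanity check in the Maybe monad (c1..c4 are made-up computations, not from the proposal):

```haskell
import Control.Applicative (liftA2)

c1, c2 :: Maybe Int
c1 = Just 1
c2 = Just 2

c3, c4 :: Int -> Maybe Int
c3 y = Just (y + 10)
c4 x = Just (x * 100)

-- the plain do-block
original :: Maybe Int
original = do
  x <- c1
  y <- c2
  _z <- c3 y
  c4 x

-- a desugaring that pairs the first two computations with liftA2
desugared :: Maybe Int
desugared = liftA2 (,) c1 c2 >>= \(x, y) -> c3 y *> c4 x

main :: IO ()
main = print (original == desugared, original)
```

Both evaluate to Just 100, so the two formulations agree on this example.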
Re: [Haskell-cafe] Smarter do notation
On 5 September 2011 08:35, Max Bolingbroke batterseapo...@hotmail.com wrote:

> (If you do want to support the type checker only generating requests for
> an Applicative constraint you could just insist that user code writes pure
> instead of return, in which case this would be quite easy to implement)

I take back this parenthetical remark. Using pure instead of return only
solves the most boring 1/2 of the problem :-)

Using the Applicative methods to optimise do desugaring is still possible,
it's just not that easy to have that weaken the generated constraint from
Monad to Applicative, since only degenerate programs like this one won't use
a Monad method:

    do computation1
       computation2
       computation3

Max
Re: [Haskell-cafe] Make shared library - questions
On 26 August 2011 13:52, Sergiy Nazarenko nazarenko.ser...@gmail.com wrote:

> /usr/local/lib/ghc-6.12.3/base-4.2.0.2/libHSbase-4.2.0.2.a(Conc__270.o):
> relocation R_X86_64_32 against `base_GHCziConc_ensureIOManagerIsRunning1_closure'
> can not be used when making a shared object; recompile with -fPIC

So it looks like you will have to recompile the base libraries with -fPIC
(or try this on Windows, where PIC is unnecessary). The way I would do this
personally is to recompile all of GHC, and in mk/build.mk set GhcLibHcOpts
to -fPIC. There may be a way to just recompile the base library (and its
dependents) using Cabal, but I'm less sure about that path.

Note that using -fPIC will probably impose a performance penalty on the
library code.

Max
Re: [Haskell-cafe] ANNOUNCE: Chell: A quiet test runner (low-output alternative to test-framework)
On 11 August 2011 05:17, John Millikin jmilli...@gmail.com wrote:

> This is just a quick package I whipped up out of frustration with
> test-framework scrolling an error message out of sight, for the millionth
> time.

Patches to make test-framework less noisy (either by default or with a flag)
will be gratefully accepted, if anyone wants to give it a go :-)

Max
Re: [Haskell-cafe] ANNOUNCE: Chell: A quiet test runner (low-output alternative to test-framework)
On 11 August 2011 15:49, John Millikin jmilli...@gmail.com wrote:

> I tried, actually, but couldn't figure out how to separate running the
> test from printing its output. All the attempted patches turned into huge
> refactoring marathons.

Just FYI, test-framework already has exactly this split between running
tests and printing their results. If you had wanted to change this, you
could have modified showImprovingTestResult in
https://github.com/batterseapower/test-framework/blob/master/core/Test/Framework/Runners/Console/Run.hs.

However, as someone else has already pointed out, the --hide-successes flag
does what you want, and you can even make it the default for your particular
testsuite by making your main be:

    do { args <- getArgs; defaultMainWithArgs tests (["--hide-successes"] ++ args) }

Cheers, Max
Re: [Haskell-cafe] Analyzing slow performance of a Haskell program
On 7 August 2011 06:15, Chris Yuen kizzx2+hask...@gmail.com wrote:

> I am mainly interested in making the Haskell version perform comparatively
> to the C# version. (Right now it is at least 5x slower, so obviously I am
> missing something obvious)

You have a map call which is immediately consumed by solve. GHC's fusion
won't help you because solve is not defined in terms of foldr. Fusing this
manually (http://hpaste.org/49936) you can get a 10% improvement.

Another source of problems is the divMod calls. There are two issues:

1. The result of the divMod is a pair of boxed Int64s. This can be worked
around by using div and mod separately instead, but that is actually slower
even though it avoids the boxing.

2. The divMod is checked: i.e. it throws a Haskell exception if the first
argument is minBound or the second is 0. This means that divMod does two
equality checks and one unboxing operation (which will just be an
always-taken branch, thanks to pointer tagging) before it actually reaches
GHC.Base.divInt#.

If I use divInt# and modInt# directly like so:

{{{
wordLength' :: Int64 -> Int64 -> Int64
wordLength' !pad !n@(I64# n#)
  | n < 10      = lenOnes n + pad
  | n < 20      = lenTeens (n - 10) + pad
  | n < 100     = splitterTen
  | n < 1000    = splitter 100 7
  | n < 1000000 = splitter 1000 8
  | otherwise   = splitter 1000000 7
  where
    splitterTen =
        let -- !(!t, !x) = n `divMod` 10
            t = n# `divInt#` 10#
            x = n# `modInt#` 10#
        in wordLength' (lenTens (I64# t) + pad) (I64# x)
    splitter !(I# d#) !suffix =
        let -- !(!t, !x) = n `divMod` d
            t = n# `divInt#` d#
            x = n# `modInt#` d#
        in wordLength' (wordLength' (suffix + pad) (I64# t)) (I64# x)
}}}

We sacrifice these checks, but the code gets 25% faster again.

I can't see anything else obviously wrong with the Core, so the remaining
issues are likely to be things like loop unrolling, turning div by a
constant int divisor into a multiply, and other code generation issues. I
tried -fllvm but it has no effect. At a guess, this is because optimisations
are impeded by the call to stg_gc_fun in the stack check that solve makes.

In short, I don't see how to get further without changing the algorithm or
doing some hacks like manual unrolling. Maybe someone else has some ideas?

Max
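A related standard-Haskell note (an aside, not part of the original optimisation): div/mod and quot/rem differ in rounding direction, and quot/rem maps more directly onto the machine division instruction, which is one reason these operations deserve scrutiny in hot loops. A quick sketch of the semantic difference:

```haskell
main :: IO ()
main = do
  -- div/mod round toward negative infinity
  print ((-7) `divMod` 2)   -- (-4,1)
  -- quot/rem round toward zero, matching hardware division
  print ((-7) `quotRem` 2)  -- (-3,-1)
```

For non-negative operands the two families agree, so switching to quot/rem is a common safe speedup in that case.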
Re: [Haskell-cafe] Diagnose stack space overflow
On 4 July 2011 16:44, Logo Logo sarasl...@gmail.com wrote:

> Hi, For the following error:
>
>     Stack space overflow: current size 8388608 bytes.
>     Use `+RTS -Ksize -RTS' to increase it.
>
> I want to find out the culprit function and rewrite it tail-recursively.
> Is there a way to find out which function is causing this error other than
> reviewing the code manually?

It's possible that building your program with profiling and then running
with +RTS -xc will print a useful call stack.

Cheers, Max
Re: [Haskell-cafe] Exclusive mode in openFile
On 28 June 2011 17:50, Gracjan Polak gracjanpo...@gmail.com wrote:

> It seems I'm not allowed to open same file both for writing and for
> reading:

This behaviour is part of the Haskell 98 specification (section 21.2.3,
http://www.haskell.org/onlinereport/io.html):

    Implementations should enforce as far as possible, at least locally to
    the Haskell process, multiple-reader single-writer locking on files.
    That is, there may either be many handles on the same file which manage
    input, or just one handle on the file which manages output. If any open
    or semi-closed handle is managing a file for output, no new handle can
    be allocated for that file.

I've been bitten by this before and don't like it. It would be possible for
GHC to enforce Unix semantics instead (there are appropriate flags to
CreateFile that get those semantics on Windows), which would support more
use cases. This change would have to be carefully thought through, and the
report would have to be amended.

Cheers, Max
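A minimal sketch demonstrating the report-mandated locking under GHC (the file name lock-demo.txt is made up for the example; behaviour on non-GHC implementations may differ):

```haskell
import Control.Exception (IOException, try)
import System.IO

main :: IO ()
main = do
  writeFile "lock-demo.txt" "hello"
  -- take the single-writer lock on the file
  h <- openFile "lock-demo.txt" WriteMode
  -- while a writer handle is open, a second handle should be refused
  r <- try (openFile "lock-demo.txt" ReadMode) :: IO (Either IOException Handle)
  case r of
    Left _  -> putStrLn "second open refused: file is locked"
    Right _ -> putStrLn "second open succeeded"
  hClose h
```

Under GHC the second openFile throws an IOException mentioning that the file is locked, so this prints the "refused" branch.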
Re: [Haskell-cafe] Exclusive mode in openFile
On 28 June 2011 18:56, Gracjan Polak gracjanpo...@gmail.com wrote:

> Anyway, where do I find an 'openFileShared' function? Packages unix/Win32
> do not have obvious leads...

Perhaps the functions in System.Posix.IO do what you want?
http://hackage.haskell.org/packages/archive/unix/2.4.2.0/doc/html/System-Posix-IO.html

The equivalent on the Win32 side is CreateFile:
http://hackage.haskell.org/packages/archive/Win32/2.2.0.2/doc/html/System-Win32-File.html

There is a way to create a Handle from a Fd
(http://hackage.haskell.org/packages/archive/base/latest/doc/html/GHC-IO-Handle-FD.html#v:fdToHandle),
but I'm not sure if there is an equivalent to build a Handle from a Win32
HANDLE...

Hope those pointers help...

Max
Re: [Haskell-cafe] Data.List / Map: simple serialization?
If you want plain text serialization, writeFile "output.txt" . show and
fmap read (readFile "output.txt") should suffice...

Max

On 9 June 2011 08:23, Dmitri O.Kondratiev doko...@gmail.com wrote:

> Hello, Please advise on existing serialization libraries. I need a simple
> way to serialize Data.List and Data.Map to plain text files. Thanks,
> Dmitri
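A concrete round trip along those lines (the file name output.txt is from the suggestion above; the example Map contents are made up):

```haskell
import qualified Data.Map as Map

main :: IO ()
main = do
  let m = Map.fromList [(1 :: Int, "one"), (2, "two"), (3, "three")]
  -- serialize: show renders the Map as parseable text
  writeFile "output.txt" (show m)
  -- deserialize: read parses the same syntax back
  m' <- fmap read (readFile "output.txt")
  print (m' == m)
```

This prints True; the same pattern works for plain lists, since both have Show and Read instances.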
Re: [Haskell-cafe] Data.List / Map: simple serialization?
Hi Dmitri,

On 9 June 2011 09:13, Dmitri O.Kondratiev doko...@gmail.com wrote:

> I wonder how Haskell will distribute memory between the buffer for
> sequential element access (list elements, map tree nodes) and memory for
> computation while reading in list, Data.Map from file?

Your list only has 30,000 elements. From the description of the problem, you
traverse the list several times, so GHC will create an in-memory linked list
that persists for the duration of all the traversals. This is OK, because
the number of elements in the list is small.

For the construction of the Map, it sounds like in the worst case you will
have 30,000*30,000 = 900,000,000 elements in the Map, which you may not want
to keep in memory. Assuming show, read and list creation are lazy enough,
and as long as you use the Map linearly, GHC should be able to GC parts of
it to keep the working set small. You should experiment and see what
happens.

My advice is just to write the program the simple way (with show and read,
and not worrying about memory) and see what happens. If it turns out that it
uses too much memory, you can come back to the list with your problematic
program and ask for advice.

Max
Re: [Haskell-cafe] HUnit false-positive stumper
On 6 June 2011 02:34, KQ qu...@sparq.org wrote:

> The shock here is that there was only one failure, whereas the False ~=?
> True should have failed.

I'm not sure, but at a glance it looks like you might have the usual problem
where compiling your test with optimisations means that GHC optimises away
the test failure. This is a known problem. If you compile with
-fno-state-hack (or -O0) it should work.

Max
Re: [Haskell-cafe] HUnit false-positive stumper
On 6 June 2011 16:18, Jimbo Massive jimbo.massive-hask...@xyxyx.org wrote:

> Or is this bad behaviour due to HUnit doing something unsafe?

I think it may be related to this bug:
http://hackage.haskell.org/trac/ghc/ticket/5129

The suggested fix is to change HUnit to define assertFailure with throwIO,
but the latest source code still uses throw:
http://hackage.haskell.org/trac/ghc/ticket/5129

So this could very well be a HUnit bug.

Max
Re: [Haskell-cafe] HUnit false-positive stumper
On 6 June 2011 16:43, Max Bolingbroke batterseapo...@hotmail.com wrote:

> The suggested fix is to change HUnit to define assertFailure with throwIO,
> but the latest source code still uses throw:

Err, I mean:
http://hackage.haskell.org/packages/archive/HUnit/latest/doc/html/src/Test-HUnit-Lang.html
Re: [Haskell-cafe] [Maybe Int] sans Nothings
On 23 May 2011 17:20, michael rice nowg...@yahoo.com wrote:

> What's the best way to end up with a list composed of only the Just
> values, no Nothings?

http://haskell.org/hoogle/?hoogle=%3A%3A+%5BMaybe+a%5D+-%3E+%5Ba%5D

Data.Maybe.catMaybes is what you want :-)

Cheers, Max
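For example (Data.Maybe.mapMaybe is also handy when you want to map and drop Nothings in one pass):

```haskell
import Data.Maybe (catMaybes, mapMaybe)

main :: IO ()
main = do
  -- keep only the Just values
  print (catMaybes [Just 1, Nothing, Just 2, Nothing, Just 3 :: Maybe Int])
  -- map and filter in a single traversal
  print (mapMaybe (\x -> if even x then Just (x * 10) else Nothing) [1 .. 5 :: Int])
```

This prints [1,2,3] and then [20,40].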
Re: [Haskell-cafe] Using cmake with haskell
On 15 May 2011 18:46, Rogan Creswick cresw...@gmail.com wrote:

> On Sun, May 15, 2011 at 8:21 AM, Malcolm Wallace malcolm.wall...@me.com wrote:
>> On 5/14/11 6:12 PM, Nathan Howell wrote:
>>> Waf supports parallel builds and works with GHC without too much
>>> trouble.
>>
>> I'm surprised no-one has yet mentioned Shake, a build tool/library
>> written in Haskell. It does parallel builds, multi-language working,
>> accurate dependencies, etc etc. I use it every day at work, and it is
>> robust, scalable, and relatively easy to use.
>
> I didn't realize Shake had been released! Thanks for pointing that out!

The version Malcolm uses is closed source - he has pointed you to my
poorly-documented open-source reimplementation. Any bugs+infelicities you
find are my fault rather than problems with the Shake concept :-)

Max
Re: [Haskell-cafe] Debugging with gdb?
On 13 April 2011 07:59, Svante Signell svante.sign...@telia.com wrote:

> As I don't know anything about Haskell, can I make a stupid question: Is
> there any method to create debug symbols for a Haskell program, and is it
> possible to debug with gdb?

You cannot create debug symbols. Things that are possible:

1. You may use the -debug flag to GHC to link with the debug RTS, which has
full debugging information for GDB. Note that this only lets you debug the
*RTS*, not any of the code you wrote.

2. Use GDB to debug your Haskell code without giving it any symbols or
understanding of the Haskell calling conventions. This is very difficult.
Information on this is on the GHC wiki:
http://hackage.haskell.org/trac/ghc/wiki/Debugging/CompiledCode?redirectedfrom=DebuggingGhcCrashes

3. Use the GHCi debugger, which does actually work surprisingly well:
http://www.haskell.org/ghc/docs/7.0.3/html/users_guide/ghci-debugger.html

Cheers, Max
Re: [Haskell-cafe] Encoding of Haskell source files
On 4 April 2011 11:34, Daniel Fischer daniel.is.fisc...@googlemail.com wrote:

> If there's only a single encoding recognised, UTF-8 surely should be the
> one (though perhaps Windows users might disagree, iirc, Windows uses UCS2
> as standard encoding).

Windows APIs use UTF-16, but the encoding of files (which is the relevant
point here) is almost uniformly UTF-8 - though of course you can find legacy
apps making other choices.

Cheers, Max
Re: [Haskell-cafe] Encoding-aware System.Directory functions
On 31 March 2011 09:13, Ketil Malde ke...@malde.org wrote:

>> -- | Try to decode a FilePath to Text, using the current locale encoding.
>> -- If the filepath is invalid in the current locale, it is decoded as
>> -- ASCII and any non-ASCII bytes are replaced with a placeholder.
>
> Why not map them to individual placeholders, i.e. in a private Unicode
> area? This way, the conversion could be invertible, and possibly even
> sensibly displayable given a guesstimated alternative locale (e.g. as
> Latin 1 if the locale is a West European one).

This is what Python's PEP 383 proposes, and it is implemented in Python 3.
This means that you can treat file names as strings uniformly (which is
really nice), but does have disadvantages: for example, printing a string to
a UTF-8 console can throw an exception if the string contains one of these
surrogate characters.

Cheers, Max
Re: [Haskell-cafe] DSL for task dependencies
On 31 March 2011 12:44, oliver mueller oliver.muel...@gmail.com wrote:

> it's a bit sad to see that shake is completely off the table since it
> really looked good.

I think Neil has had trouble getting permission to release the code, which
is why I wrote openshake.

> maybe openshake can fill in here...anyone knows if it is already in a
> usable state?

It is usable (I use it for building the data files that represent my
finances), but there is one serious known bug I have yet to fix that causes
too little to be rebuilt, and there is absolutely no documentation. I will
fix both of these issues at *some* point...

Max
Re: [Haskell-cafe] Encoding-aware System.Directory functions
On 30 March 2011 07:52, Michael Snoyman mich...@snoyman.com wrote:

> I could manually do something like (utf8Decode . S8.pack), but that
> presumes that the character encoding on the system in question is UTF8. So
> two questions:

Funnily enough I have been thinking about this quite hard recently, and the
situation is kind of a mess; short of implementing PEP 383
(http://www.python.org/dev/peps/pep-0383/) in GHC I can't see how to make it
easier on the programmer. As Jason points out, the best you can really do is
probably:

1. Treat Strings that represent filenames as raw byte sequences, even
though they claim to be strings.

2. When presenting such Strings to the user, re-decode them by using the
current locale encoding (which will typically be UTF-8). You probably want
to have some means of avoiding decoding errors here too - ignoring or
replacing undecodable bytes - but presently this is not so straightforward.
If you happen to be on a system with GNU iconv you can use its
C//TRANSLIT//IGNORE encoding to achieve this, however.

Cheers, Max
Re: [Haskell-cafe] Encoding-aware System.Directory functions
On 30 March 2011 10:20, Tako Schotanus t...@codejive.org wrote:

> http://www.haskell.org/pipermail/libraries/2009-August/012493.html
>
> I took from this discussion that FilePath really should be a pair of the
> actual filename ByteString, and the printable String (decoded from the
> ByteString, with encoding specified by the user's locale). The conversion
> from ByteString to String (and vice versa) is not guaranteed to be
> lossless, so you need to remember both.

My understanding is that the ByteString is the one source of truth about
what the file is called, and you can derive the String from that by assuming
some encoding, which is what I proposed in my earlier message. I guess that
as an optimisation you could cache the String decoded with a particular
encoding as well, but to my mind it's not obviously worth it.

> I'm not sure that I agree with that. Why does it have to be loss-less? The
> problem, more likely, is the fact that FilePath is just a simple string.
> Maybe we should go the way of Java where cross-platform file access is
> based upon a File (or the new Path) type?

An opaque Path type has been discussed before and would indeed help a lot,
but it would break backwards compatibility in a fairly major way. It might
be worth it, though.

Max
Re: [Haskell-cafe] [Haskell] Linker flags for foreign export.
On 13 March 2011 22:02, Jason Dusek jason.du...@gmail.com wrote:

> Is there any case in which the empty string would be unsafe?

AFAIK this stuff is only used to set up the +RTS options and some of the
stuff in System.Environment. I think that the contents of the program name
will only cause problems if some code that uses getProgName chokes on the
empty string.

Cheers, Max
Re: [Haskell-cafe] ContT and ST stack
On 10 March 2011 17:55, Bas van Dijk v.dijk@gmail.com wrote:

> On 10 March 2011 18:24, Yves Parès limestr...@gmail.com wrote:
>> Why has the operator (.) troubles with a type like (forall s. ST s a)?
>> Why can't it match the type 'b' in (.) definition?
>
> As explained by the email from SPJ that I linked to, instantiating a type
> variable (like 'b') with a polymorphic type (like 'forall s. ST s a') is
> called impredicative polymorphism. Since GHC-7 this is not supported any
> more because it was too complicated.

AFAIK this decision was reversed because SPJ found a simple way to support
them. Indeed, they work fine in 7.0.2 and generate warnings. Try it out:

{{{
{-# LANGUAGE ImpredicativeTypes #-}
module Impred where

f :: Maybe (forall a. [a] -> [a]) -> Maybe ([Int], [Char])
f (Just g) = Just (g [3], g "hello")
f Nothing = Nothing
}}}

Unfortunately, the latest user guide still reflects the old situation:
http://www.haskell.org/ghc/docs/latest/html/users_guide/other-type-extensions.html

Cheers, Max
Re: [Haskell-cafe] [Haskell] Linker flags for foreign export.
On 10 March 2011 04:04, Jason Dusek jason.du...@gmail.com wrote: I'm trying to hew relatively close to Duncan Coutts' blog posting in working through this; so I have different code and a new Makefile: Starting with your code I've managed to make it work (OS X 10.6, GHC 7). The Makefile is: loadfoo: loadfoo.c gcc -arch i386 loadfoo.c -o loadfoo libfoo.dynamic-dynamic.so: Foo.hs fooinit.c ghc -fPIC -c fooinit.c ghc --make -dynamic -fPIC -c Foo.hs ghc -shared -dynamic -o libfoo.dynamic-dynamic.so \ Foo.o Foo_stub.o fooinit.o \ -lHSrts_debug-ghc7.0.1.20101215 test: loadfoo libfoo.dynamic-dynamic.so ./loadfoo libfoo.dynamic-dynamic.so (I have to build loadfoo in 32-bit mode because GHC generates 32-bit code on OS X). The fact that we supply the RTS to the GHC link line is important because the shared library would not otherwise link against any particular RTS - in the normal course of things, GHC only decides on the RTS when linking the final executable. I'm linking against the debug RTS there but it should work with the normal one too - I just used the debug version to work out why I got a bus error upon load.. .. and the reason was that your fooinit.c was buggy - that probably account for the crashes you were seeing. The problem is that hs_init takes a *pointer to* argc and argv, not argc and argv directly. You supplied 0 for both of these so GHC died when it tried to dereference them. My new fooinit.c (that works) is as follows: #include HsFFI.h extern void __stginit_Foo(void); static void Foo_init (void) __attribute__ ((constructor)); void Foo_init (void) { char *arg1 = fake-main; char *argv[1] = { arg1 }; char **argv_p = argv; char ***pargv_p = argv_p; int argc = sizeof(argv) / sizeof(char *); hs_init(argc, pargv_p); hs_add_root(__stginit_Foo); } (Apologies for the extreme ugliness, it's been so long since I wrote C in anger the chain of initializers is the only way I could get this to build without a GCC warning about casts between different pointer types). 
I tested it with this Foo.hs:

    {-# LANGUAGE ForeignFunctionInterface #-}
    module Foo where

    import Foreign.C

    foreign export ccall foo :: CInt -> CInt

    foo :: CInt -> CInt
    foo = (+1)

And a change to loadfoo.c that actually tries to call foo:

         printf("%s\n", "symbol lookup okay");
    +    printf("%d\n", foo(1336));
       } else {

(You need int (*foo)(int) = dlsym(dl, "foo"); as well.) The output is 1337, as expected.

Cheers, Max
___ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] [Haskell] Linker flags for foreign export.
Hi Jason,

Following your advice, I was able to get a working main, linking the .o's (no attempt at an SO this time) with GHC.

I haven't tried it, but how about this:

1. Use ghc to link a standard Haskell executable that requires your libraries. Run the link step with -v so you can see the linker flags GHC uses.

2. Use those link flags to link a .so instead. Importantly, this .so will have already been linked against the Haskell RTS, so you will be able to link it into a C program with no further dependencies.

Now, there will be at least one additional complication. Before C can call into Haskell functions you will need to arrange for hs_init and hs_add_root to be executed. In order to do this you will probably have to write an additional C file, bootstrap.c. This should contain an initialization procedure that calls hs_init and hs_add_root in a function decorated with the __attribute__((constructor)) GCC extension. Bootstrap.o should then be statically linked in with the .so in order to initialise the Haskell RTS when C dynamically links in that .so.

Is there a tutorial I should be following? Well-Typed's blog post on this in the early days of shared object support seemed to be doing what I was doing.

Last time I checked, Well-Typed's blog posts were one of the best sources of information on this rather arcane topic. There is no easy way to make this work at the moment, AFAIK :-(

Cheers, Max
Re: [Haskell-cafe] [Haskell] Linker flags for foreign export.
Hi Jason,

On 8 March 2011 05:28, Jason Dusek jason.du...@gmail.com wrote:

    gcc -g -Wall -O2 -fPIC -Wall -o import \
      -I/usr/lib/ghc-6.12.1/include/ \
      import.c exports.so

In my experience, the easiest way to do this is to use gcc to build object files from C source files, and then specify those object files on the ghc command line in order to get GHC to do the linking step. This will deal with linking in the correct RTS and, if you specify appropriate -package flags, dependent packages as well. If you want a C main function, then see user guide section 8.2.1.1 at http://www.haskell.org/ghc/docs/latest/html/users_guide/ffi-ghc.html.

Cheers, Max
Re: [Haskell-cafe] ANNOUNCE: cinvoke 0.1 released
Hi Remi,

On 6 March 2011 13:38, Remi Turk rt...@science.uva.nl wrote: I am happy to finally announce cinvoke 0.1, a binding to the C library cinvoke[1], allowing functions to be loaded and called whose names and types are not known before run-time.

As the author of the libffi package (http://hackage.haskell.org/package/libffi-0.1), which does a similar thing, could you say when it would be appropriate to use one or the other package?

Cheers, Max
Re: [Haskell-cafe] Linking errors when compiling projects with the ncurses-0.2 library
Hi Roman,

2011/3/5 Román González romanand...@gmail.com: ld: warning: in /Users/roman/.homebrew/lib/libncursesw.dylib, file was built for unsupported file format which is not the architecture being linked (i386)

This is the problem. You are using OS X 10.6, and Homebrew is building a 64-bit ncurses. However, GHC generates 32-bit code that expects to link against a 32-bit ncurses. In MacPorts, you would solve this by reinstalling ncurses with the flag +universal, which instructs MacPorts to build a universal binary containing both 32- and 64-bit versions of ncurses. I'm not sure what the equivalent would be for Homebrew.

Cheers, Max
Re: [Haskell-cafe] Thoughts on program annotations.
On 4 March 2011 06:32, Jason Dusek jason.du...@gmail.com wrote:

    -- From https://github.com/solidsnack/bash/blob/c718de36d349efc9ac073a2c7082742c45606769/hs/Language/Bash/Syntax.hs
    data Annotated t = Annotated t (Statement t)
    data Statement t = SimpleCommand Expression [Expression]
                     | ...
                     | IfThen (Annotated t) (Annotated t)
                     | ...

I use a variant of this approach quite extensively and it works well for me. My scheme is:

    data Statement t = SimpleCommand Expression [Expression]
                     | ...
                     | IfThen (t (Statement t)) (t (Statement t))
                     | ...

This is a slightly more efficient representation because it lets you unpack the t field of your Annotated data constructor. For example, what in your system would be:

    type MyStatement = Statement (Int, Int)

Would in my system be:

    data Ann s = Ann Int Int s
    type MyStatement = Statement Ann

i.e. instead of allocating both a Statement and a (,) at each level we allocate just an Ann at each level.

In this system you will probably find it convenient to have a typeclass inhabited by each possible annotation type:

    class Copointed t where
      extract :: t a -> a

    instance Copointed Ann where
      extract (Ann _ _ x) = x

Anyway, this is only a minor efficiency concern - your scheme looks solid to me as well.

Cheers, Max
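For concreteness, here is a compilable sketch of the unpacked-annotation scheme described in that reply. The Expression stub, the position fields in Ann, the statementOf helper and the example value are all illustrative inventions of this write-up, not code from the bash package:

```haskell
type Expression = String  -- illustrative stub for the real expression type

-- An annotation carrying (say) a line and a column; the payload lives
-- inline in the constructor, so one box is allocated per tree level.
data Ann s = Ann !Int !Int s

data Statement t
  = SimpleCommand Expression [Expression]
  | IfThen (t (Statement t)) (t (Statement t))

-- One typeclass inhabited by every possible annotation type.
class Copointed t where
  extract :: t a -> a

instance Copointed Ann where
  extract (Ann _ _ x) = x

-- Strip one layer of annotation, for any annotation type.
statementOf :: Copointed t => t (Statement t) -> Statement t
statementOf = extract

example :: Ann (Statement Ann)
example = Ann 1 0 (IfThen (Ann 1 3 (SimpleCommand "true" []))
                          (Ann 2 2 (SimpleCommand "echo" ["hi"])))
```

Swapping in a different annotation type (say, source spans instead of line/column) only requires a new Copointed instance; the Statement type is untouched.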
Re: [Haskell-cafe] Rank-2 types in classes
On 2 March 2011 09:11, Yves Parès limestr...@gmail.com wrote:

    class (forall x. Monad (IM i x)) => Impl i where
        data IM i :: * -> * -> *

But GHC forbids me to do so.

The way I usually work around this is by doing something like the following pattern:

{{{
class Monad1 m where
    return1 :: a -> m x a
    bind1 :: m x a -> (a -> m x b) -> m x b

instance Monad1 (IM MyI) where
    return1 = ...
    bind1 = ...

instance Monad1 m => Monad (m x) where
    return = return1
    (>>=) = bind1
}}}

Your class can now have a (Monad1 (IM i)) superclass context. You will have to enable a few extensions to get this through - most likely FlexibleInstances and OverlappingInstances.

Cheers, Max
Re: [Haskell-cafe] Rank-2 types in classes
2011/3/2 Yves Parès limestr...@gmail.com: Is what I'm trying to do a common technique to type-ensure contexts or are there simpler methods?

I don't understand your problem well enough to be able to venture a solid opinion on this. Sorry! What you have detailed so far doesn't sound too complex, though.

Max
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 23 February 2011 05:31, Johan Tibell johan.tib...@gmail.com wrote: On Tue, Feb 22, 2011 at 9:19 PM, Johan Tibell johan.tib...@gmail.com wrote: Initial numbers suggest that lookup gets 3% slower and insert/delete 6% slower. The upside is O(1) size.

Can someone come up with a real world example where O(1) size is important? I'm a bit sceptical that it is (I was not convinced by the earlier strict-set-inclusion argument, since that's another Data.Map feature I've never used). I thought of some other possibilities, though:

1. If copying an unordered collection to a flat array, you can improve the constant factors (not the asymptotics) with O(1) size by pre-allocating the array.

2. If building a map in a fixed-point loop (I personally do this a lot) where you know that the key uniquely determines the element, you can test whether a fixed point has been reached in O(1) by just comparing the sizes. Depending on what you are taking a fixed point of, this may change the asymptotics.

3. Some map-combining algorithms work more efficiently when one of their two arguments is smaller. For example, Data.Map.union is most efficient for (bigmap `union` smallmap). If you don't care about which of the two input maps wins when they contain the same keys, you can improve constant factors by comparing the sizes of the two input maps (in O(1)) and flipping the arguments if you got (smallmap `union` bigmap) instead of the desirable way round.

Personally I don't find any of these *particularly* compelling. But a ~6% slowdown for this functionality is not too bad - have you had a chance to look at the core to see if the cause of the slowdown manifests itself at that level? Perhaps it is possible to tweak the code to make this cheaper.

Also, what was the size of the collections you used in your benchmark? I would expect the relative cost of maintaining the size to get lower as you increased the size of the collection.
Max
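To make point 2 concrete, here is a sketch of a size-based fixed-point test using Data.Map, whose size is cached and therefore O(1). The fixpoint/succStep names and the example are my own illustration, not code from the thread; the scheme is only sound because each key uniquely determines its element, so equal sizes imply equal maps:

```haskell
import qualified Data.Map as M

-- Iterate 'step', unioning new entries in, until the map stops growing.
-- Since the key determines the element, the O(1) size comparison is
-- enough to detect the fixed point - no content comparison needed.
fixpoint :: Ord k => (M.Map k v -> M.Map k v) -> M.Map k v -> M.Map k v
fixpoint step m
  | M.size m' == M.size m = m
  | otherwise             = fixpoint step m'
  where m' = M.union m (step m)  -- left-biased: existing entries win

-- Example step: add the bounded successor of every key already present.
succStep :: M.Map Int Int -> M.Map Int Int
succStep m = M.fromList [ (k + 1, k + 1) | k <- M.keys m, k < 5 ]
```

Starting from M.singleton 0 0, this saturates to a map whose keys are [0..5].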
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 23 February 2011 12:05, Gregory Collins g...@gregorycollins.net wrote: I've been working on one lately, some preliminary benchmarks: https://gist.github.com/826935 It's probably a month or two away from a releasable state, but my work-in-progress is substantially faster (4-6X) than Data.Hashtable for inserts and lookups.

Sounds cool. Can you use your HashTable in the ST monad? I have often found myself wanting a mutable hashtable as a subcomponent in an externally-pure computation.

Max
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 23 February 2011 16:03, Johan Tibell johan.tib...@gmail.com wrote: Thanks for the examples. Point 3 is interesting but most of the gain there could probably be had by telling the user to use (bigmap `union` smallmap). My guess is that the user has a good idea which argument is larger/smaller.

Totally agreed - this matches my experience with Map/Set.

And I didn't see anything that looked particularly bad. The core uses unboxed values everywhere possible and the recursive function (i.e. inserts) returns an unboxed pair (# Int#, Tree k v #).

Right. I was wondering if you were returning (Int, Tree k v), in which case CPR wouldn't unbox the Int - but I see you already thought of that issue.

By the way, do you plan to add a HashSet to complement HashMap? Thanks for all the work on the library!

Max
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 23 February 2011 21:27, Gwern Branwen gwe...@gmail.com wrote: On Wed, Feb 23, 2011 at 1:18 PM, Johan Tibell johan.tib...@gmail.com wrote: Could you manually look at some of them to see if you find something interesting. In particular `Set.size s == 0` (a common use of size in imperative languages) could be replaced by `Set.null s`. You could look at them yourself; I attached the files. I see 6 uses out of ~100 which involve an == 0

Thanks for bringing some data to the table. There are definitely some common patterns in what you sent me:

1) For defining Binary instances, you need to write the set size before you write the elements: ~7 occurrences

2) Tests against small constants (typically == 0 or 1, but also 2 and 3!): ~15 occurrences

3) A surprise to me: generating fresh names! People keep a set of all names generated so far, and then just take size+1 as their fresh name. Nice trick. ~17 occurrences

4) Turning sizes into strings for human consumption: ~19 occurrences

5) Just reexporting the functions somehow. Uninformative. ~8 occurrences

There were ~38 occurrences over ~13 repos where it appeared to be somehow fundamental to an algorithm (I didn't look into most of these in detail). I've put those after my message. Frankly I am surprised how much size gets used. It seems that making it fast is more important than I thought.
Cheers, Max

=== Fundamental-looking occurrences:

bin/folkung/folkung/Clausify.hs: siz (And ps) = S.size ps
bin/folkung/folkung/Paradox/Flatten.hs: | S.size cl = 1 + S.size bs
bin/folkung/folkung/Paradox/Flatten.hs: || S.size cl = S.size cl' = largestClique cl gr'
bin/folkung/folkung/Paradox/Flatten.hs: -- S.size (free xs) = 1
bin/folkung/folkung/Paradox/Flatten.hs: n = S.size (free ls)
bin/folkung/folkung/Paradox/Flatten.hs: , S.size ws n-1
bin/folkung/folkung/Paradox/Flatten.hs: (S.size s1,tpsize v2,inter s2) `compare` (S.size s2,tpsize v1,inter s1)
bin/folkung/folkung/Paradox/Flatten.hs: sum [ S.size (s `S.intersection` vs) | (v,vs) <- cons, v `S.member` s ]
bin/folkung/folkung/Paradox/Flatten.hs: , S.size ws' S.size freeRight-1
bin/folkung/folkung/Paradox/Solve.hs: degree x = S.size . free $ x
bin/gf/src/compiler/GF/Compile/GeneratePMCFG.hs: | product (map Set.size ys) == count
bin/gf/src/compiler/GF/Speech/CFGToFA.hs: indeg (c,_) = maybe 0 Set.size $ Map.lookup c ub
bin/gf/src/compiler/GF/Speech/CFGToFA.hs: where (fa', ns) = newStates (replicate (Set.size cs) ()) fa
bin/gf/src/GF/Compile/GeneratePMCFG.hs: | product (map Set.size ys) == count =
bin/gf/src/GF/Compile/GeneratePMCFGOld.hs: | product (map Set.size ys) == count =
bin/gf/src/GF/Data/MultiMap.hs: size = sum . Prelude.map Set.size . Map.elems
bin/gf/src/GF/Speech/CFGToFA.hs: indeg (c,_) = maybe 0 Set.size $ Map.lookup c ub
bin/gf/src/GF/Speech/CFGToFA.hs: where (fa', ns) = newStates (replicate (Set.size cs) ()) fa
bin/halfs/Halfs/FSRoot.hs: $ Set.size $ fsRootAllInodeNums fsroot
bin/halfs/Halfs.hs: when (Set.size freeSet /= length freeList) (
bin/hcorpus/hcorpus.hs: let rank = 1 `fidiv` Set.size wrds,
bin/hoogle/src/Hoogle/DataBase/TypeSearch/Binding.hs: g (l, vs) = Just $ [restrict|isJust l] ++ replicate (max 0 $ Set.size vs - 1) var
bin/htab/src/Formula.hs: countNominals f = Set.size $ extractNominals f
bin/hylores-2.5/src/HyLoRes/Clause/BasicClause.hs: size = Set.size . toFormulaSet
bin/hylores-2.5/src/HyLoRes/Subsumption/ClausesByFormulaIndex.hs: let sortedCandidates = sortBy (compareUsing Set.size) subsCandidates
bin/hylores-diego/src/HyLoRes/Clause/BasicClause.hs: size = Set.size . toFormulaSet
bin/hylores-diego/src/HyLoRes/Subsumption/ClausesByFormulaIndex.hs: let sortedCandidates = sortBy (compareUsing Set.size) subsCandidates
bin/ipclib/Language/Pepa/Compile/States.hs: Just limit -> ((stateSpaceSize seen) + (Set.size stack)) limit
bin/jhc/src/Grin/SSimplify.hs: v n | n `IS.member` s = v (1 + n + IS.size s)
bin/jhc/src/Ho/Build.hs: maxModules <- Set.size `fmap` countNodes cn
bin/jhc/src/Ho/Build.hs: maxModules <- Set.size `fmap` countNodes cn
bin/lhc/src/Grin/Optimize/CallPattern.hs: nPatterns = Set.size callPatterns
bin/proteinvis/Graphics/Visualization/Tree/Geometry.hs: where theta = ((fromIntegral index - (fromIntegral (S.size . fst . S.split (Name name) $ missingLeaves))) * 2 * 3.14157) / (num_leaves - fromIntegral (S.size missingLeaves))
bin/proteinvis/ProgramState.hs: c = S.size go
bin/proteinvis/Protein.hs: , term_count = S.size terms
bin/proteinvis/Protein.hs: , term_count = S.size terms
bin/protocol-buffers/hprotoc/Text/ProtocolBuffers/ProtoCompile/Resolve.hs: when (Set.size numbers /= Seq.length (D.DescriptorProto.field dp)) $
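Pattern 3 above (fresh names from the set size) can be sketched like this. The code and the "x" naming scheme are my own illustration, not taken from any of the scanned repositories; it is only collision-free if every name in the set was produced by this same generator:

```haskell
import qualified Data.Set as S

-- Generate a name guaranteed not to be in 'used', using only the O(1)
-- size. If the set holds exactly the names "x1".."xN" generated so far,
-- then "x" ++ show (N + 1) cannot already be a member.
fresh :: S.Set String -> (String, S.Set String)
fresh used = (name, S.insert name used)
  where name = "x" ++ show (S.size used + 1)
```

Threading the returned set through repeated calls yields "x1", "x2", "x3", and so on.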
Re: [Haskell-cafe] ANN: unordered-containers - a new, faster hashing-based containers library
On 20 February 2011 03:40, Louis Wasserman wasserman.lo...@gmail.com wrote: I'd like to complain about that, too ;)

I'm curious: when do people find that they need a really fast way to get map size? I use them quite a lot, and almost never call length - and when I do, it is typically only to get some statistics about the map for user display - certainly not in an inner loop.

I find it is still useful to have a way to quickly test if a map is empty (for e.g. fixed points that slowly grow a map by unioning in a new bit every time), but null works OK for that purpose.

Cheers, Max
Re: [Haskell-cafe] Alex -g
On 20 February 2011 19:56, Mihai Maruseac mihai.marus...@gmail.com wrote: Hi, When running Alex -g I get several warnings telling me that a bang pattern is required and that the warning will be an error in GHC 6.14.

As it happens, that is not an error in GHC 7 (see http://hackage.haskell.org/trac/ghc/ticket/3278):

    $ ghci -XMagicHash
    GHCi, version 7.0.1.20101215: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer-gmp ... linking ... done.
    Loading package base ... linking ... done.
    Loading package ffi-1.0 ... linking ... done.
    Prelude> let x = 2# in 10

    <interactive>:1:5:
        Warning: Bindings containing unlifted types should use an outermost bang pattern:
            x = 2#
        In the expression: let x = 2# in 10
        In an equation for `it': it = let x = 2# in 10

    <interactive>:1:5:
        Warning: Bindings containing unlifted types should use an outermost bang pattern:
            x = 2#
        In the expression: let x = 2# in 10
        In an equation for `it': it = let x = 2# in 10
    10

(I'm not quite sure why I get two identical warnings.)

It looks like Alex has gone back and forth on what to do here. Selected commits:

    Fri Nov 26 10:43:42 GMT 2010  Simon Marlow marlo...@gmail.com
      * don't use CPP for LANGUAGE pragmas
        Drop the -fno-warn-lazy-unlifted-bindings again. CPP in the header
        prevents the user from adding their own LANGUAGE pragmas at the top
        of the .x file if they're using GHC 7.0.

    Wed Nov 17 10:42:23 GMT 2010  Simon Marlow marlo...@gmail.com
      * Stop using -fglasgow-exts, and turn off bogus bang-pattern warnings

    Wed Nov 17 10:41:06 GMT 2010  Simon Marlow marlo...@gmail.com
      * remove bang patterns on unlifted bindings again

    Wed Jun 3 23:58:54 BST 2009  Ian Lynagh ig...@earth.li
      * Use bang patterns on unlifted bindings

So it looks like you should just ignore this warning, or turn it off :-)

Max
Re: [Haskell-cafe] Sub-optimal [code]
On 16 February 2011 21:51, Andrew Coppin andrewcop...@btinternet.com wrote: (Now, if only there was a version that feeds an integer to the monadic action as well... Still, it's not hard to implement.)

As simple as:

    forM [1..x] mk_my_action
Re: [Haskell-cafe] Sub-optimal [code]
On 16 February 2011 22:48, Daniel Fischer daniel.is.fisc...@googlemail.com wrote: The problem with that is that under certain circumstances the list is shared in nested loops, which was what caused the thread (it was mapM_ and not forM_, but I'd be very surprised if they behaved differently with -O2).

Yep - d'oh!

Thinking about it some more, this example is actually quite interesting because if you *prevent* the list from being floated, the forM gets foldr/build fused into a totally listless optimal loop. It really does seem like a shame to disable that optimisation because of the floating... if only the fusion hit before float-out was run.

Max
Re: [Haskell-cafe] upgrading mtl1 to mtl2
On 17 February 2011 07:28, Sebastian Fischer fisc...@nii.ac.jp wrote: I must admit I still don't understand your exact problem. Could you help me with an example where using mtl2 requires an additional (Functor m) constraint that is not required when using mtl1?

I think the problem is that the mtl1 Functor instances looked like:

    instance Monad m => Functor (ReaderT e m) where
        fmap = ...

But the mtl2/transformers instances look like:

    instance Functor f => Functor (ReaderT e f) where
        fmap = ...

This is overall an improvement, but it does mean that if you relied on getting fmap for e.g. a (ReaderT e m) monad from a (Monad m) constraint with mtl1, then your code is now broken. You need to add (Functor m) to your context for mtl2.

Naturally, this would not be a problem if Functor were a Monad superclass..

Cheers, Max
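A minimal illustration of the breakage (my own sketch, using the transformers-style ReaderT): under mtl1 this definition type-checked with only a (Monad m) constraint, because Functor (ReaderT Int m) followed from Monad m; under mtl2/transformers the (Functor m) constraint must be written explicitly:

```haskell
import Control.Monad.Trans.Reader (ReaderT, ask, runReader)

-- fmap at type ReaderT Int m needs Functor m under mtl2/transformers;
-- under mtl1 that Functor instance was derived from Monad m instead.
double :: (Functor m, Monad m) => ReaderT Int m Int
double = fmap (* 2) ask
```

For example, runReader double 21 evaluates to 42. (On GHCs where Functor is a superclass of Monad, the extra constraint is merely redundant.)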
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
2011/2/15 Gábor Lehel illiss...@gmail.com: This is a semi-related question I've been meaning to ask at some point: I suppose this also means it's not possible to write a class, write some rules for the class, and then have the rules be applied to every instance? (I.e. you'd have to write them separately for each?)

This does work, because it doesn't require the simplifier to look up class instances. However, it's a bit fragile. Here is an example:

    class Foo a where
        foo :: a -> a
        bar :: a -> a
        foo_bar :: a -> a

    {-# RULES "foo/bar" forall x. foo (bar x) = foo_bar x #-}

    instance Foo Bool where
        foo = not
        bar = not
        foo_bar = not

    instance Foo Int where
        foo = (+1)
        bar x = x - 1
        foo_bar = (+2)

    {-# NOINLINE foo_barish #-}
    foo_barish :: Foo a => a -> a
    foo_barish x = foo (bar x)

    main = do
        print $ foo (bar False)       -- False if rule not applied, True otherwise
        print $ foo (bar (2 :: Int))  -- 2 if rule not applied, 4 otherwise
        print $ foo_barish False      -- False if rule not applied, True otherwise
        print $ foo_barish (2 :: Int) -- 2 if rule not applied, 4 otherwise

With GHC 7, the RULE successfully rewrites the foo.bar composition within foo_barish to use foo_bar. However, it fails to rewrite the two foo.bar compositions inlined directly in main. Thus the output is:

    False
    2
    True
    4

The reason it cannot rewrite the calls in main is (I think) because the foo/bar class selectors are inlined before the rule matcher gets to spot them. By using NOINLINE on foo_barish, and ensuring that foo_barish is overloaded, we prevent the simplifier from doing this inlining and hence allow the rule to fire.

What is more interesting is that I can't get the foo (bar x) rule to fire on the occurrences within main even if I add NOINLINE pragmas to the foo/bar names in both the class and instance declarations. Personally I would expect writing NOINLINE in the class declaration to prevent the class selector being inlined, allowing the rule to fire, but that is not happening for some reason.
Perhaps this is worth a bug report on the GHC trac? It would at least give it a chance of being fixed.

Max
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
2011/2/15 Simon Peyton-Jones simo...@microsoft.com: but currently any pragmas in a class decl are treated as attaching to the *default method*, not to the method selector:

I see. I didn't realise that that was what was happening. Personally I find this a bit surprising, but I can see the motivation. Of course, a sensible alternative design would be to have them control the selectors, and then you could declare that you want your default methods to be inlined like this:

{{{
class MyClass a where
    foo :: a -> a
    foo = default_foo

{-# INLINE default_foo #-}
default_foo = ... big expression ...
}}}

I think this design+workaround is slightly preferable to your proposal because it avoids clients of a library defining a class from having to write instances with decorated names. But maybe it's not such a big win as to be worth making the change.

In any event, perhaps it would be worth warning if you write an INLINE pragma for some identifier in a class declaration where no corresponding default method has been declared, in just the same way you would if you wrote an INLINE pragma for a non-existent binding?

Cheers, Max
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
On 15 February 2011 11:23, Roman Leshchinskiy r...@cse.unsw.edu.au wrote: I wouldn't necessarily expect this to guarantee inlining for the same reason that the following code doesn't guarantee that foo gets rewritten to big:

    foo = bar
    {-# INLINE bar #-}
    bar = big

It might work with the current implementation (I'm not even sure if it does) but it would always look dodgy to me.

In this case there doesn't seem to be any point inlining anyway, because nothing is known about the context into which you are inlining. Nonetheless, what will happen (I think) is that any users of foo will get the definition of foo inlined (because that doesn't increase program size), so now they refer to bar instead. Now GHC can look at the use site of bar and the definition of bar and decide whether it is a good idea to inline.

Basically, I expect the small RHS for the default in my class declaration to be inlined unconditionally, and then GHC's heuristics will determine how and when to inline the actual default definition (e.g. default_foo). This differs from the current story in that with the present setup you can write the INLINE and default method directly in the class definition, and then GHC does not need to inline the small RHS of the default to get a chance to apply its inlining heuristics on the actual default method. However, given that these small RHSes *should* be inlined eagerly and ubiquitously, there shouldn't be a detectable difference between writing default methods directly and the proposed pattern for adding INLINE pragmas to default methods.

Also, what if I write:

    class MyClass a where
        foo :: a -> a
        foo x = default_foo x

I assume this wouldn't guarantee inlining?

I don't know about any guarantee - again, personally I would only hope the inlining would occur should GHC decide it is worth it - but this still looks like it should be OK under the no-size-increase inlining heuristic. I think the simplifier will probably avoid actually inlining unless foo is applied to at least 1 arg, to avoid increasing allocation, but any interesting use site will meet that condition. I do not really know what the simplifier does in enough detail to know exactly what will happen here, though. This is just an educated guess as to what will happen, which makes me think that my proposed pattern is OK.

Cheers, Max
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
On 15 February 2011 15:12, Roman Leshchinskiy r...@cse.unsw.edu.au wrote: Ah, but you assume that bar won't be inlined into foo first. Consider that it is perfectly acceptable for GHC to generate this:

    foo = big
    {-# INLINE bar #-}
    bar = big

We did ask to inline bar, after all.

Well, yes, but when considering the use site for foo don't we now inline the *original RHS* of foo? This recent change means that it doesn't matter whether bar gets inlined into foo first - use sites of foo will only get a chance to inline the bar RHS.

Cheers, Max
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
On 15 February 2011 16:45, Roman Leshchinskiy r...@cse.unsw.edu.au wrote: Only if foo has an INLINE pragma. Otherwise, GHC uses whatever RHS is available when it wants to inline.

Ah, I see! Well yes, in that case my workaround is indeed broken in the way you describe, and there is no way to repair it, because in my proposal you wouldn't be able to write an INLINE pragma on the actual default method definition. Thanks for pointing out my error.

Max
Re: [Haskell-cafe] Vector library
On 14 February 2011 10:19, Pierre-Etienne Meunier pierreetienne.meun...@gmail.com wrote: For instance, it allows you to program a single function that works for any dimensionality of the array.

I don't know how vector handles it, but you may be interested to look at the repa library (http://hackage.haskell.org/packages/archive/repa/1.1.0.0/doc/html/Data-Array-Repa.html). This lets you write dimensionality-polymorphic functions by using the Shape typeclass: http://hackage.haskell.org/packages/archive/repa/1.1.0.0/doc/html/Data-Array-Repa-Shape.html#t:Shape

Max
Re: [Haskell-cafe] rewrite rules to specialize function according to type class?
On 14 February 2011 21:43, Patrick Bahr pa...@arcor.de wrote: Am I doing something wrong or is it not possible for GHC to dispatch a rule according to type class constraints?

As you have discovered, this is not possible. You can write the rule for as many *particular* types as you like, but you can't write it in a way that abstracts over the exact type class instance you mean. This is a well-known and somewhat tiresome issue. I think the reason that this is not implemented is that it would require the rule matcher to call back into the type-checking machinery to do instance lookup.

Cheers, Max
Re: [Haskell-cafe] Sub-optimal
On 14 February 2011 21:00, Andrew Coppin andrewcop...@btinternet.com wrote: Is this a known bug? (GHC 6.10.x) It's known to happen when optimising shares what shouldn't be shared. Try compiling with -O2 -fno-cse (if that doesn't help, it doesn't necessarily mean it's not unwanted sharing, though). And, please, let us see some code to identify the problem. I tried -O2 -fno-cse. No difference. I also tried -O2 -fno-full-laziness. BIG DIFFERENCE.

See also the very old GHC ticket at http://hackage.haskell.org/trac/ghc/ticket/917

Max
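The shape of program that tends to trigger this is sketched below. This is my own hedged illustration, not Andrew's actual code: full laziness may float a free-standing list out of a repeatedly entered expression, so one fully evaluated copy is retained across all iterations instead of being consumed and collected incrementally.

```haskell
-- With -O2, GHC's full-laziness pass may float [1 .. 1000000] out of
-- innerSum to the top level, sharing one evaluated list across all 100
-- outer iterations and ballooning residency. With -fno-full-laziness
-- the list stays inside the loop and is garbage-collected as consumed.
innerSum :: Int -> Int
innerSum i = sum [1 .. 1000000] + i

main :: IO ()
main = mapM_ (print . innerSum) [1 .. 100]
```

Comparing `+RTS -s` output with and without -fno-full-laziness on a program of this shape is a quick way to check whether unwanted sharing is the culprit.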
Re: [Haskell-cafe] MonadPeelIO instance for monad transformers on top of forall
On 5 February 2011 02:35, Sebastian Fischer fisc...@nii.ac.jp wrote: I have not used monad-peel before so please ignore my comment if I am missing something obvious. The documentation mentions that instances of MonadPeelIO should be constructed via MonadTransPeel, using peelIO = liftPeel peelIO. I think this would work at least in your simplified example as ReaderT is an instance of MonadTransPeel. Maybe you can take the same route with your actual transformer?

I did indeed try this, but the error message was even more incomprehensible than my attempt to lift the MonadPeelIO (ReaderT () IO) instance directly! My reading of the error was that it suffers from the same problem (too little polymorphism).

Cheers, Max
Re: [Haskell-cafe] MonadPeelIO instance for monad transformers on top of forall
On 5 February 2011 04:19, Anders Kaseorg ande...@mit.edu wrote: Just to demonstrate that I didn’t use the triviality of ReaderT (), here’s a less trivial example with ReaderT and StateT:

This is superb, thank you! I would never have come up with that :-) It still seems to fail somehow on my real example, but I'm confident I can get it working now. Many thanks!

Max
Re: [Haskell-cafe] OSX i386/x86 and x86_64 - time to switch supported platforms?
On 4 February 2011 02:35, Steve Severance st...@medwizard.net wrote: Wholly support moving OSX to x64. x86 should be supported only on a best effort basis for legacy.

Moving from x86 to x64 has advantages and disadvantages from my POV.

Advantages:
* Able to address more memory
* More registers for code generation
* Haskell dependencies wouldn't need to be built for x86 on Snow Leopard (though if we swapped to x64 on Leopard as well, the Leopard users would start having to build 64-bit libraries specially)

Disadvantages:
* Pointers become wider, and Haskell data structures mostly consist of pointers. This will bloat memory use of Haskell programs.
* Generated binaries won't work on older Macs that don't have a 64-bit OS/CPU. This is important if you are distributing compiled Haskell binaries, which is not something I personally do but which is probably important to support.

Did I miss anything?

I don't know if anyone using a 64-bit GHC on e.g. Linux has reported experience of whether moving to 64 bits is a net win or not from a performance perspective. My guess is that it is a win for certain classes of programs (numerically intensive, high-performance Haskell), and a loss for programs making extensive use of laziness, boxed data structures etc.

I notice that there is some work towards standardisation of an x32 ABI for 64-bit applications using thin, 32-bit pointers. See http://robertmh.wordpress.com/2011/01/19/finally-amd32-is-taking-shape/. This might be an interesting thing to explore when it becomes more fully developed.

Cheers, Max
Re: [Haskell-cafe] How to #include into .lhs files?
On 4 February 2011 05:03, Michael Snoyman mich...@snoyman.com wrote: My guess (a complete guess) is that the deliterate step is creating a temporary .hs file elsewhere on your filesystem, which is why the CPP step can't find B.hs without a fully-qualified path. That is what is happening (you can confirm with ghc -v). Interestingly, you can work around it by compiling with -I. on the command line:

{{{
mbolingbroke@equinox ~/Junk/lhscpp $ cat Test.lhs Included.h
{-# LANGUAGE CPP #-}
module Main (main) where

#include "MachDeps.h"
#include "Included.h"

main :: IO ()
main = print MY_CONSTANT

#define MY_CONSTANT 2
mbolingbroke@equinox ~/Junk/lhscpp $ ./Test
2
}}}

Perhaps worth filing a bug report? Cheers Max
[Haskell-cafe] MonadPeelIO instance for monad transformers on top of forall
Hi Anders, I'm using your monad-peel package to good effect in my project, but I'm having trouble lifting peelIO through a particular monad. What follows is a simplified description of my problem. Say I have this monad:

{{{
data M a = M { unM :: forall m. MonadPeelIO m => Reader.ReaderT () m a }

instance Monad M where
    return x = M (return x)
    M mx >>= fxmy = M $ mx >>= unM . fxmy

instance MonadIO M where
    liftIO io = M (liftIO io)
}}}

It seems clear that there should be a MonadPeelIO instance for M, but I can't for the life of me figure it out. I've tried:

{{{
instance MonadPeelIO M where
    peelIO = M (fmap (\peel (M mx) -> liftM M (peel mx)) peelIO)
}}}

But this is not polymorphic enough: the peelIO gives me back a function (ReaderT () m a -> IO (ReaderT () m a)) for some *fixed* m, so I can't pack the result of that function into an M again using (liftM M). Have you (or the big brains on Haskell-Cafe, who are CCed) come across this before? Is there an obvious solution I am missing? Cheers, Max
Re: [Haskell-cafe] MonadPeelIO instance for monad transformers on top of forall
On 4 February 2011 21:41, Anders Kaseorg ande...@mit.edu wrote: On Fri, 4 Feb 2011, Max Bolingbroke wrote: data M a = M { unM :: forall m. MonadPeelIO m => Reader.ReaderT () m a } Maybe this won’t help in your actual code, but isn’t M isomorphic to IO (via unM :: M a -> IO a, M . liftIO :: IO a -> M a)? Well, yes :-). My real code actually has a non-trivial ReaderT transformer and a StateT transformer before it reaches m. I had hoped that by restricting to ReaderT () the problem would be simpler and hence clearer.

{{{
instance MonadPeelIO M where
    peelIO = M (liftIO (liftM (\k (M mx) -> liftM (\my -> M (liftIO my)) (k mx)) peelIO))
}}}

This doesn't type check for me (I think the root of the trouble is that you run peelIO in the IO monad, but then later do (k mx) where mx is not an IO computation). Is this definition trying to exploit the isomorphism, or do you think that this is a solution to the general class of problems I'm having trouble with? Thanks for your reply! Max
Re: [Haskell-cafe] ANN: HackageOneFive: Reverse dependency lookup for all packages on Hackage
On 2 February 2011 11:57, Simon Hengel simon.hen...@wiktory.org wrote: Hello, I wrote a tiny Snap app that provides reverse dependency lookup for all packages on Hackage. Are you familiar with Roel van Dijk's reverse-dependency Hackage? http://bifunctor.homelinux.net/~roel/hackage/packages/hackage.html It's usually quite up to date, and very useful. btw: Is there still progress on Hackage 2.0? Last I heard, Matt Gruen had done quite a lot of work on it in the last Summer of Code (see http://cogracenotes.wordpress.com/), but it hadn't actually been deployed - though that was a few months ago. Cheers, Max
Re: [Haskell-cafe] The implementation of Control.Exception.bracket
On 31 January 2011 14:17, Leon Smith leon.p.sm...@gmail.com wrote: Is there some subtle semantic difference? Is there a performance difference? It seems like a trivial thing, but I am genuinely curious. According to my understanding the two should have equivalent semantics. As for performance, I whipped up a trivial Criterion microbenchmark, and the version that doesn't use finally seems to consistently benchmark 32ns (33%) faster than the version that does use it, likely because it avoids a useless mask/restore pair. (Note that this result is reversed if you compile without -O2, I guess because -O2 optimises the library's finally enough to overcome the fact that it does an extra mask). Code in the appendix. Cheers, Max

===

{{{
{-# LANGUAGE Rank2Types #-}
import Control.Exception
import Criterion.Main

{-# NOINLINE bracket_no_finally #-}
bracket_no_finally :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracket_no_finally before after thing = mask $ \restore -> do
    a <- before
    r <- restore (thing a) `onException` after a
    _ <- after a
    return r

{-# NOINLINE bracket_finally #-}
bracket_finally :: IO a -> (a -> IO b) -> (a -> IO c) -> IO c
bracket_finally before after thing = mask $ \restore -> do
    a <- before
    r <- restore (thing a) `finally` after a
    return r

{-# NOINLINE test_bracket #-}
test_bracket :: (forall a b c. IO a -> (a -> IO b) -> (a -> IO c) -> IO c) -> IO ()
test_bracket bracket = bracket (return ()) (\() -> return ()) (\() -> return ())

main = defaultMain [ bench "finally" $ test_bracket bracket_finally
                   , bench "no finally" $ test_bracket bracket_no_finally ]
}}}
Re: [Haskell-cafe] source line annotations
On 19 January 2011 23:59, Evan Laforge qdun...@gmail.com wrote: Another thing is that performance in logging is pretty critical. When I profile, logging functions wind up in the expensive list if I'm not careful. I don't know how bad an unsafePerformIO plus a catch for every log msg is going to be, but it's not going to be as fast as doing the work at compile time. It is possible that GHC's optimiser will let-float the getSourceLoc assert application to the top level, which would ensure that you only take the hit the first time. However, I'm not sure about this - do an experiment! Since logging is in IO anyway, perhaps GHC could provide a new magic primitive, (location :: IO String), which would be replaced by the compiler with an expression like (return "MyLocation:42:1"). Your consumer could then look like (myLogFunction location extra-info), where myLogFunction is in IO and would deal with >>=ing in the location. This would be much less of a potential performance drag than going through exceptions. Shouldn't be too hard to patch GHC to do this, either. Cheers, Max
Re: [Haskell-cafe] Building lambdabot
On 20 January 2011 17:30, Gwern Branwen gwe...@gmail.com wrote: * You need to loosen the base upper bound to 4.4 * If using base >= 4, you need to depend on the syb package as well (current version 0.3) Would this break GHC 6.12 builds? That's why I suggested the flag stanza. Cabal has a weird kind of flag semantics where it will try every possible combination of flags until it finds one that builds. The solution I suggested uses this behaviour to either depend on base >= 4 && < 4.4 WITH syb, OR base < 4 WITHOUT syb. Because of the default flag setting of True, the first possibility will be tried first, but if it fails Cabal will just fall back on base < 4. In short, it should work perfectly for either GHC 7 or 6.12 clients (modulo syntax issues - I haven't actually tried the syntax I sent you). data-memocombinators is only the tip of the iceberg; I believe much of lambdabot would need modifications. (Apparently Control.OldException has gone away, which alone guarantees many changes.) So there wouldn't be much point to changing show. Actually, I: * Fixed the show upper bound * Fixed data-memocombinators and data-inttrie upper bounds (and sent a pull request to get Luke to take these changes upstream) * Got Wouter to change the upper bound on IOSpec (will be in the 0.2.2 release, out soon) And after all that, lambdabot seemed to compile OK (though I got a link-time error about iconv because I'm on a Mac). So if you fix show, GHC 7 users will be able to cabal install lambdabot! Cheers, Max
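[Editorial note: the flag stanza discussed above is not reproduced in the thread. A sketch of the kind of conditional being described follows - the flag name and exact bounds are illustrative, and the syntax is untested, as the author himself notes:]

{{{
flag new-base
  description: Build against base >= 4 (which requires depending on syb)
  default: True

library
  if flag(new-base)
    build-depends: base >= 4 && < 4.4, syb
  else
    build-depends: base < 4
}}}

Because the flag defaults to True, the solver tries the base >= 4 branch first and only falls back to the base < 4 branch if that fails to resolve.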
Re: [Haskell-cafe] Building lambdabot
On 20 January 2011 20:50, Gwern Branwen gwe...@gmail.com wrote: Notice the flag defaults to False, not True. When I tried it with True, I got:

{{{
$ cabal install
Resolving dependencies...
cabal: dependencies conflict: base-3.0.3.2 requires syb ==0.1.0.2
however syb-0.1.0.2 was excluded because syb-0.3 was selected instead
syb-0.1.0.2 was excluded because show-0.4.1 requires syb ==0.3.*
}}}

Surprising. It certainly seems to contradict my understanding of how Cabal works (which is derived from http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#conditional-resolution). Anyway, I'm glad that you have recorded a patch which at least gives GHC 7 a chance of working. If I knew where the repo was I would try it out (it doesn't seem to be linked anywhere from http://hackage.haskell.org/packages/archive/show/0.4.1/show.cabal). Cheers, Max
Re: [Haskell-cafe] Building lambdabot
On 20 January 2011 22:39, Gwern Branwen gwe...@gmail.com wrote: It's just a folder in the lambdabot repo, as is lambdabot-utils and unlambda and brainfuck. I'm happy to report that show-0.4.1.1 does in fact build in my GHC 7 environment unmodified, so the Cabal hackery is working. You might consider pushing this tweaked version to Hackage. Cheers, Max
Re: [Haskell-cafe] source line annotations
On 19 January 2011 22:43, Evan Laforge qdun...@gmail.com wrote: Reasons why I don't actually want this after all? Are you aware of the magic assert function? http://hackage.haskell.org/packages/archive/base/latest/doc/html/Control-Exception-Base.html#v:assert The compiler spots it and replaces it with a function that raises an exception containing the source location if the assert fails. It is deeply evil, but you can write a library function like this (untested):

{{{
getSourceLoc :: (Bool -> a -> a) -> String
getSourceLoc my_assert = unsafePerformIO (evaluate (my_assert False (error "Impossible"))
                                          `catch` (\(AssertionFailed s) -> return s))
}}}

Now your consumers can write (getSourceLoc assert :: String) and you will have a value that describes where you are. You can then use this to implement logging, throw errors etc. The annoying part is that you have to explicitly pass in assert from the call site to make sure that you get the right source location reported. Cheers, Max
Re: [Haskell-cafe] Building lambdabot
On 18 January 2011 19:32, Joe Bruce bruce.jo...@gmail.com wrote: I did them both from scratch, just to make sure I understood what was going on and could communicate it for the next poor sap that is as ignorant as I was. Congratulations :-) Should/can I update the haskellwiki with this install process? Should the mac readline info go there too? That sounds like a good thing to do. Also, do you know if there's any reason that the most recent lambdabot is not pushed to Hackage? That might make things even easier for others who wish to install it. It certainly confused me! Cheers, Max
Re: [Haskell-cafe] is datetime actively maintained?
Hi, I also wanted to build gitit on Windows and encountered the datetime issue. I sent the maintainer (Eric Sessoms) a request to bump his version bounds on the 22nd December, but haven't received a reply. (I had to bump a lot of other gitit version bounds, but those bumps *have* been taken upstream.) I'm not sure what to do in a situation like this - since Hackage has no security I could just upload a new version of datetime with the bumped bounds, but this hardly seems polite, and I'm not really keen on maintaining datetime long term. The change would also be lost if Eric decides to upload a new version of his own. Cheers, Max On 17 January 2011 02:05, Carter Schonwald carter.schonw...@gmail.com wrote: Hey All, I can't tell if datetime is actively maintained, and its version constraints need to be adjusted to allow base 4 (which would then allow a whole mess of other packages, including gitit, to build under ghc7). I've been successful building gitit with ghc7 64 on OS X and using it if that one constraint on datetime is modified. Any thoughts? cheers, -Carter
Re: [Haskell-cafe] Building lambdabot
On 7 January 2011 06:20, Joe Bruce bruce.jo...@gmail.com wrote: Thanks Max. That makes a lot of sense. That change got me to the point of linking lambdabot. Well, I tried to see if I could reproduce your problem but didn't get to this stage. It looks like v4.2.2.1 from Hackage hasn't been updated for donkey's years and breaks massively because of at least the new exceptions library, mtl 2.0 and the syb package being split off from base. I started to fix these issues but lost the will to live at step [39 of 81] Compiling Plugin.Eval. Do you have access to some fixed source code that I might be able to build on GHC 7? readline is compiled with iconv from MacPorts. I don't think readline links against iconv. What that error says to me is that GHC is failing to link against *any* iconv. I bet that it's because you have iconv installed via MacPorts without +universal. Try:

{{{
$ file /opt/local/lib/libiconv.a
}}}

It should say:

{{{
/opt/local/lib/libiconv.a: Mach-O universal binary with 2 architectures
/opt/local/lib/libiconv.a (for architecture i386): current ar archive random library
/opt/local/lib/libiconv.a (for architecture x86_64): current ar archive random library
}}}

If it doesn't, do:

{{{
$ sudo port install libiconv +universal
}}}

And then link lambdabot again. Does that help? Max
Re: [Haskell-cafe] Building lambdabot
On 6 January 2011 07:27, Joe Bruce bruce.jo...@gmail.com wrote: Now I'm stuck on readline again [lambdabot build step 28 of 81]: /Users/joe/.cabal/lib/readline-1.0.1.0/ghc-6.12.3/HSreadline-1.0.1.0.o: unknown symbol `_rl_basic_quote_characters' This seems to have been covered on Stack Overflow: http://stackoverflow.com/questions/1979612/using-cabal-readline-package-on-i386-macbook-snow-leopard
Re: [Haskell-cafe] Building lambdabot
On 6 January 2011 16:11, Joe Bruce bruce.jo...@gmail.com wrote: Thanks, Max. I had seen that thread already, but I don't understand how it helps me. I'm on an x64 mac and I have both an i386 and x64 version of readline installed (via macports install readline +universal). Perhaps cabal is choosing the wrong one. How do I find out? How do I tell it which to use? And which do I want it to use? Well, MacPorts +universal should install fat binaries that include both x86 and x64. On my machine:

{{{
mbolingbr...@c089 ~ $ file /opt/local/lib/libreadline.5.0.dylib
/opt/local/lib/libreadline.5.0.dylib: Mach-O universal binary with 2 architectures
/opt/local/lib/libreadline.5.0.dylib (for architecture i386): Mach-O dynamically linked shared library i386
/opt/local/lib/libreadline.5.0.dylib (for architecture x86_64): Mach-O 64-bit dynamically linked shared library x86_64
mbolingbr...@c089 ~ $ file /opt/local/lib/libreadline.a
/opt/local/lib/libreadline.a: Mach-O universal binary with 2 architectures
/opt/local/lib/libreadline.a (for architecture i386): current ar archive random library
/opt/local/lib/libreadline.a (for architecture x86_64): current ar archive random library
}}}

GHC on OS X builds 32-bit executables, so you need to link against these binaries. Long story short:

{{{
$ cabal unpack readline
}}}

Then edit readline.cabal to add this line to the end:

{{{
extra-lib-dirs: /opt/local/lib
}}}

(You must indent it to put it in the library section). Finally:

{{{
$ sudo cabal install --configure-option=--with-readline-includes=/opt/local/include --configure-option=--with-readline-libraries=/opt/local/lib
}}}

It will now work. You can try it by opening ghci and typing: System.Console.Readline.readline "Command Me" An alternative to editing the cabal file would have been to link to the MacPorts fat-binary libreadline.a from the OS X system library directories (maybe /usr/lib or /usr/local/lib).
Cheers, Max
Re: [Haskell-cafe] Lazy cons, Stream-Fusion style?
Hi Stephen, On 2 January 2011 13:35, Stephen Tetley stephen.tet...@gmail.com wrote: Can a lazy cons be implemented for (infinite) Streams in the Stream-Fusion style? I made a mailing list post about almost exactly this issue a while ago (http://www.mail-archive.com/haskell-cafe@haskell.org/msg82981.html). There was no really nice solution offered: I think the best you can do is define your stream operations with a lazy pattern match using my eta trick from that post:

{{{
eta stream = Stream (case stream of Stream s _ -> unsafeCoerce s :: ())
                    (case stream of Stream _ step -> unsafeCoerce step)
}}}

Then instead of writing:

{{{
f (Stream x y) = ...
}}}

Write:

{{{
f (eta -> Stream x y) = ...
}}}

(This is necessary because lazy pattern matches on existential data constructors using ~ cannot even be expressed in System FC, so it is unclear how GHC could implement them). Cheers, Max
Re: [Haskell-cafe] UTF-8 in Haskell.
On 23 December 2010 05:29, Magicloud Magiclouds magicloud.magiclo...@gmail.com wrote: If so, OK, then I think I could make a packInt which turns an Int into 4 Word8 first. Thus under all situations (ascii, UTF-8, or even UTF-32), my program always sends 4 bytes through the network. Is that OK? I think you are describing the UTF-32 encoding (under the assumption that fromEnum on Char returns the Unicode code point of that character, which I think is true). UTF-32 is capable of describing every Unicode code point, so this is indeed non-lossy. UTF-32 is a reasonable wire transfer format (if a bit inefficient!). Don't roll your own encoding logic though: System.IO provides a TextEncoding for UTF-32 you can use to do the job more reliably. Cheers, Max
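[Editorial note: a minimal sketch of using the stock UTF-32 TextEncoding from System.IO instead of hand-rolled packInt logic - this code is not from the original message, and the file name is made up:]

{{{
import System.IO

main :: IO ()
main = withFile "message.bin" WriteMode $ \h -> do
    -- Every code point is written as 4 bytes; the unmarked utf32
    -- encoding also emits a byte-order mark at the start of the stream
    hSetEncoding h utf32
    hPutStr h "hello"
}}}

On the receiving side you would similarly hSetEncoding the handle to utf32 (or to utf32le/utf32be to fix the byte order and avoid the BOM) before reading.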
Re: [Haskell-cafe] OT: Monad co-tutorial: the Compilation Monad
On 17 December 2010 12:45, Larry Evans cppljev...@suddenlink.net wrote: Am I doing something wrong or has community.haskell.org somehow been hijacked? Er, it works for me. Maybe *your* DNS has been hijacked? I know lots of Windows viruses play tricks like this... Max
Re: [Haskell-cafe] OT: Monad co-tutorial: the Compilation Monad
On 17 December 2010 00:59, Gregg Reynolds d...@mobileink.com wrote: My real goal is to think about better language for software build systems, since what we have now is pretty weak, in my view. I can't speak for your monad based approach, but you may be interested in Neil Mitchell's Haskell DSL for build systems, called Shake: http://community.haskell.org/~ndm/downloads/slides-shake_a_better_make-01_oct_2010.pdf I have an open source implementation of it which has all the core functionality at http://github.com/batterseapower/openshake. (Warning: the code is quite ugly at the moment) Cheers, Max
Re: [Haskell-cafe] Functor = Applicative = Monad
John Smith volderm...@hotmail.com wrote: I would like to formally propose that Monad become a subclass of Applicative I'm not in favour of this change because of the code breakage issue. However, I think that an acceptable plan (mentioned elsewhere on the list, but which doesn't seem to have had any subsequent discussion) would be to introduce a GHC warning for such cases: "Warning: you are declaring a Monad instance for MyMonad without a corresponding Applicative instance. This will be an error in GHC 7.2" Then in 7.2 you can remove the warning and change the class hierarchy. (Substitute 7.4 for 7.2 if you think we need more time for library authors to update, or if you add the warning only to the 7.2 series). Cheers, Max
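[Editorial note: the boilerplate a library author would need to add to satisfy such a warning is mechanical; a sketch for a hypothetical monad T follows - this code is not from the proposal, and the names are illustrative:]

{{{
import Control.Applicative (Applicative(..))
import Control.Monad (ap, liftM)

newtype T a = T { runT :: Maybe a }  -- stand-in for your own monad

instance Monad T where
    return     = T . Just
    T mx >>= f = maybe (T Nothing) f mx

instance Functor T where
    fmap = liftM  -- derivable from the Monad instance

instance Applicative T where
    pure  = return
    (<*>) = ap    -- likewise derivable, so the warning is cheap to satisfy
}}}

Since Functor and Applicative can always be recovered from return and (>>=) via liftM and ap, the warning would not force authors to write any genuinely new code.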
Re: [Haskell-cafe] Contexts for type family instances
On 12 December 2010 12:26, Stephen Tetley stephen.tet...@gmail.com wrote: type instance (DUnit a ~ DUnit b) => DUnit (a,b) = DUnit a Requires UndecidableInstances but should work:

{{{
{-# LANGUAGE TypeFamilies #-}

type family DUnit a :: *

data Point u = P2 u u
type instance DUnit (Point u) = u

type instance DUnit (a,b) = GuardEq (DUnit a) (DUnit b)

type family GuardEq a b :: *
type instance GuardEq a a = a
}}}

More realistically, you will have to write functions that produce/consume DUnit using type classes so you can pattern match on the a of DUnit a. You could just have all your instances for DUnit (a, b) require (DUnit a ~ DUnit b):

{{{
class Consume a where
    consume :: DUnit a -> Foo

instance (DUnit a ~ DUnit b) => Consume (a, b) where
    consume a = undefined
}}}

Cheers, Max
Re: [Haskell-cafe] Fwd: Questions about lambda calculus
On 10 November 2010 12:49, Patrick LeBoutillier patrick.leboutill...@gmail.com wrote: I'm a total newbie with respect to the lambda calculus. I tried (naively) to port these examples to Haskell to play a bit with them: [snip] You can give type signatures to your first definitions like this:

{{{
{-# LANGUAGE RankNTypes #-}

type FBool = forall r. r -> r -> r

fTRUE, fFALSE :: FBool
fTRUE = \a -> \b -> a
fFALSE = \a -> \b -> b

fIF :: FBool -> a -> a -> a
fIF = \b -> \x -> \y -> b x y

type FPair a b = forall r. (a -> b -> r) -> r

fPAIR :: a -> b -> FPair a b
fPAIR = \a -> \b -> \f -> f a b

fFIRST :: FPair a b -> a
fFIRST = \p -> p (\a _ -> a)

-- Original won't type check since I gave fTRUE a restrictive type sig
fSECOND :: FPair a b -> b
fSECOND = \p -> p (\_ b -> b)
}}}

Your fADD doesn't type check because with those definitions e.g. fONE has a different type to fZERO. What you want is a system where natural numbers all have the same types. One way to do this is to represent the natural number n by the function that applies a functional argument n times. These are called Church numerals (http://en.wikipedia.org/wiki/Church_encoding):

{{{
type FNat = forall r. (r -> r) -> r -> r

fZERO :: FNat
fZERO = \_ -> id

fSUCC :: FNat -> FNat
fSUCC = \x f -> f . x f

fIS_ZERO :: FNat -> FBool
fIS_ZERO = \x -> x (\_ -> fFALSE) fTRUE

fADD :: FNat -> FNat -> FNat
fADD = \x y f -> x f . y f

fONE :: FNat
fONE = fSUCC fZERO
}}}

Try it out:

{{{
main = do
    putStrLn $ showFNat $ fZERO
    putStrLn $ showFNat $ fONE
    putStrLn $ showFBool $ fIS_ZERO fZERO
    putStrLn $ showFBool $ fIS_ZERO fONE
    putStrLn $ showFNat $ fADD fONE fONE
}}}

Exercise: define fPRED in this system :-) Cheers, Max
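[Editorial note: the message uses showFNat and showFBool without defining them. One plausible definition - an editorial assumption, not part of the original - decodes the encodings by applying them to concrete arguments:]

{{{
-- Decode a Church numeral by counting applications of (+1) to 0
showFNat :: FNat -> String
showFNat n = show (n (+1) (0 :: Int))

-- Decode a Church boolean by selecting between the two Bools
showFBool :: FBool -> String
showFBool b = show (b True False)
}}}

With these definitions the main above should print 0, 1, True, False and 2 in turn.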
Re: [Haskell-cafe] Combining wl-pprint and ByteString?
On 9 November 2010 08:01, Mark Spezzano mark.spezz...@chariot.net.au wrote: I want to be able to run it from the command line in a terminal window, and have the text come up in colours (but very fast). My current version is already very fast, but I've heard everyone raving about how slow Strings were to use for I/O, so I wanted to _experiment_ with ByteStrings to see the performance difference. I very much doubt that you will see a difference in performance between String and Text for the amounts of data that you can fit on a terminal window. Cheers, Max
Re: [Haskell-cafe] Scrap your rolls/unrolls
On 2 November 2010 14:10, Bertram Felgenhauer bertram.felgenha...@googlemail.com wrote: Indeed. I had a lot of fun with the ideas of this thread, extending the 'Force' type family (which I call 'Eval' below) to a small EDSL on the type level: I also came up with this. I was trying to use it to get two type class instances for (Monoid Int) like this, but it doesn't work:

{{{
{-# LANGUAGE TypeFamilies, EmptyDataDecls, ScopedTypeVariables,
             TypeOperators, FlexibleInstances, FlexibleContexts #-}
import Data.Monoid

-- Type family for evaluators on types
type family E a :: *

-- Tag for functor application: fundamental to our approach
infixr 9 :%
data f :% a

-- Tags for evaluator-style data declarations: such declarations contain internal
-- occurrences of E, so we can delay evaluation of their arguments
data P0T (f :: *)
type instance E (P0T f) = f

data P1T (f :: * -> *)
type instance E (P1T f :% a) = f a

data P2T (f :: * -> * -> *)
type instance E (P2T f :% a :% b) = f a b

data P3T (f :: * -> * -> * -> *)
type instance E (P3T f :% a :% b :% c) = f a b c

-- When applying legacy data types we have to manually force the arguments:
data FunT
type instance E (FunT :% a :% b) = E a -> E b

data Tup2T
type instance E (Tup2T :% a :% b) = (E a, E b)

data Tup3T
type instance E (Tup3T :% a :% b :% c) = (E a, E b, E c)

-- Evaluator-style versions of some type classes
class FunctorT f where
    fmapT :: (E a -> E b) -> E (f :% a) -> E (f :% b)

class MonoidT a where
    memptyT :: E a
    mappendT :: E a -> E a -> E a

data AdditiveIntT
type instance E AdditiveIntT = Int

instance MonoidT AdditiveIntT where
    memptyT = 0
    mappendT = (+)

data MultiplicativeIntT
type instance E MultiplicativeIntT = Int

instance MonoidT MultiplicativeIntT where
    memptyT = 1
    mappendT = (*)

-- Make the default instance of Monoid be additive:
instance MonoidT (P0T Int) where
    memptyT = memptyT :: E AdditiveIntT
    mappendT = mappendT :: E AdditiveIntT -> E AdditiveIntT -> E AdditiveIntT

main = do
    print (result :: E (P0T Int))
    print (result :: E MultiplicativeIntT)
  where
    result :: forall a. (E a ~ Int, MonoidT a) => E a
    result = memptyT `mappendT` 2 `mappendT` 3
}}}

The reason it doesn't work is clear: it is impossible to tell GHC which MonoidT Int dictionary you intend to use, since E is not injective! I think this is a fundamental flaw in the scheme. The implementation is somewhat verbose, but quite straight-forward tree manipulation. This is impressively mad. You can eliminate phase 1 by introducing the identity combinator (at least, it typechecks):

{{{
data Id
type instance Eval (App Id a) = Eval a

type instance Eval (Fix a) = Eval (Fix1 a Id)

data Fix1 a b

-- compositions, phase 2: build left-associative composition
type instance Eval (Fix1 (a :.: (b :.: c)) d) = Eval (Fix1 ((a :.: b) :.: c) d)
type instance Eval (Fix1 (a :.: Id) c) = Eval (Fix1 a c)
type instance Eval (Fix1 (a :.: ConE b) c) = Eval (Fix1 a (ConE b :.: c))
type instance Eval (Fix1 (a :.: Con1 b) c) = Eval (Fix1 a (Con1 b :.: c))

-- compositions, final step: apply first element to fixpoint of shifted cycle
type instance Eval (Fix1 Id b) = Eval (Fix b)
type instance Eval (Fix1 (ConE a) b) = a (Fix (b :.: ConE a))
type instance Eval (Fix1 (Con1 a) b) = a (Eval (Fix (b :.: Con1 a)))
}}}

I'm not sure whether this formulation is clearer or not. Cheers, Max
Re: [Haskell-cafe] Is let special?
2010/11/2 Günther Schmidt gue.schm...@web.de: I've been given a few hints over time when I asked questions concerning DSLs but regretfully didn't follow them up. I think this is probably to do with observable sharing. The problem in DSLs is if you have:

{{{
fact :: Term -> Term
fact = Factorial

instance Num Term where
    fromInteger = Int
    (+) = Plus

e = let x = fact 2 :: Term in x + x :: Term
}}}

(Let's say the terms of our deeply embedded DSL are of type Term). If you pattern match on e you will get something like (Factorial (Int 2)) `Plus` (Factorial (Int 2)). This has lost the work sharing implicit in the original term, where the Factorial computation was shared. To recover this sharing, you either need some sort of observable sharing, or some common subexpression elimination (which risks introducing space leaks if your DSL has lazy semantics). Quite a lot of papers have mentioned this issue: the one that springs to mind is Gill's Type Safe Observable Sharing. Cheers, Max
Re: [Haskell-cafe] Scrap your rolls/unrolls
On 23 October 2010 15:32, Sjoerd Visscher sjo...@w3future.com wrote: A little prettier (the cata detour wasn't needed after all):

{{{
data IdThunk a
type instance Force (IdThunk a) = a
}}}

Yes, this IdThunk is key - in my own implementation I called this Forced, so:

{{{
type instance Force (Forced a) = a
}}}

You can generalise this trick to abstract over type functions, though I haven't had a need to do this yet. Let us suppose you had these definitions:

{{{
{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}

type family Foo a :: *
type instance Foo Int = Bool
type instance Foo Bool = Int

type family Bar a :: *
type instance Bar Int = Char
type instance Bar Bool = Char
}}}

Now you want to build a data type where you have abstracted over the particular type family to apply:

{{{
data GenericContainer f_clo = GC (f_clo Int) (f_clo Bool)

type FooContainer = GenericContainer Foo
type BarContainer = GenericContainer Bar
}}}

This is a very natural thing to do, but it is rejected by GHC because Foo and Bar are partially applied in FooContainer and BarContainer. You can work around this by eta-expanding Foo/Bar using a newtype, but it is more elegant to scrap your newtype injections/extractions with the trick:

{{{
data FooClosure
data BarClosure

type family Apply f a :: *
type instance Apply FooClosure a = Foo a
type instance Apply BarClosure a = Bar a

data GenericContainer f_clo = GC (Apply f_clo Int) (Apply f_clo Bool)

type FooContainer = GenericContainer FooClosure
type BarContainer = GenericContainer BarClosure
}}}

These definitions typecheck perfectly:

{{{
valid1 :: FooContainer
valid1 = GC True 1

valid2 :: BarContainer
valid2 = GC 'a' 'b'
}}}

With type functions, Haskell finally has type-level lambda of a sort :-) Cheers, Max
Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)
On 19 October 2010 19:01, Dan Doel dan.d...@gmail.com wrote: However, this argument is a bit circular, since that eliminator could be defined to behave similarly to an irrefutable match. Right. Or, put another way, eta expansion of a dictionary-holding existential would result in a value holding a bottom dictionary, whereas that's otherwise impossible, I think. I think evaluating dictionaries strictly is more of a want-to-have than actually implemented. In particular, GHC supports building value-recursive dictionaries - and making dictionary arguments strict indiscriminately would make this feature rather useless. I think this means that even with constrained existentials things would be OK. What is definitely not OK is lazy pattern matching on GADTs which have *type equality* constraints on their existentials - because the type equalities are erased at compile time, lazy pattern matching on a GADT would leave no opportunity for us to observe that the proof of the type equality was bogus (a bottom) - so allowing this would let our programs segfault. For example, this program would erroneously typecheck:

{{{
data Eq a b where
    Eq :: Eq a a

f :: Eq Int Bool -> Bool
f (~Eq) = ((1 :: Int) + 1) :: Bool

main = print $ f undefined
}}}

I'm sticking with my unsafeCoerce based solution for now. It's ugly, but it's nicely *localised* ugliness :-) Cheers, Max
[Haskell-cafe] Scrap your rolls/unrolls
Hi Cafe,

In generic programming, e.g. as in Data Types a la Carte and Compos, you often wish to work with data types in their fixed-point-of-a-functor form. For example:

    data ListF a rec = Nil | Cons a rec
    newtype List a = Roll { unroll :: ListF a (List a) }

In some presentations, this is done by factoring out the fixed point, like so:

    newtype FixF f = Roll { unroll :: f (FixF f) }
    type List a = FixF (ListF a)

This is all well and good, but it means that when working with data types defined in this manner you have to write Roll and unroll everywhere. This is tedious :-(

Something I have long wished for is equirecursion, which would let you write this instead:

    type List a = ListF a (List a)

Note that we now have a type instead of a newtype, so we won't need to write any injection/extraction operators. However, currently GHC rightly rejects this with a complaint about a cycle in type synonym declarations.

However, you can use type families to hide the cycle from GHC:

    type List a = ListF a (Force (ListThunk a))

    data ListThunk a
    type family Force a
    type instance Force (ListThunk a) = List a

Unfortunately, this requires UndecidableInstances (the type instance RHS is not smaller than the instance head). And indeed, when we turn that extension on, GHC goes into a loop, so this may not seem very useful. However, we can make this slight modification to the data type to make things work again:

    data ListF a rec = Nil | Cons a (Force rec)
    type List a = ListF a (ListThunk a)

Note that the application of Force has moved into the *use site* of rec rather than the *application site*. This now requires no extensions other than TypeFamilies, and the client code of the library is beautiful (i.e. has no rolls/unrolls):

    example, ones :: List Int
    example = Cons 1 (Cons 2 Nil)
    ones = Cons 1 ones

We can factor this trick into a fixed-point combinator that does not require roll/unroll:

    type Fix f = f (FixThunk f)
    type List a = Fix (ListF a)

    data FixThunk f
    type family Force a
    type instance Force (FixThunk f) = Fix f

The annoying part of this exercise is that the presence of a Force in the functor definition (e.g. ListF) means that you can't make them into actual Functor instances! The fmap definition gives you a function of type (a -> b), and you need one of type (Force a -> Force b). However, you can make them into a category-extras Control.Functor.QFunctor instance (http://hackage.haskell.org/packages/archive/category-extras/0.53.5/doc/html/Control-Functor.html) by choosing the s Category to be:

    newtype ForceCat a b = ForceCat { unForceCat :: Force a -> Force b }

    instance Category ForceCat where
      id = ForceCat id
      ForceCat f . ForceCat g = ForceCat (f . g)

rather than the usual Hask Category which is used in the standard Functor. This is somewhat unsatisfying, but I'm not sure there is a better way.

I haven't seen this trick to emulate equirecursion before. Has it showed up anywhere else?

Cheers, Max
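The pieces above can be assembled into one self-contained module; toList is an illustrative helper (not in the original post) that demonstrates consuming both the finite 'example' and the value-recursive 'ones' without a single Roll or unroll:

```haskell
{-# LANGUAGE TypeFamilies, EmptyDataDecls #-}

type family Force a

data ListThunk a
type instance Force (ListThunk a) = List a

-- Force sits at the *use site* of rec, so no UndecidableInstances needed.
data ListF a rec = Nil | Cons a (Force rec)
type List a = ListF a (ListThunk a)

example, ones :: List Int
example = Cons 1 (Cons 2 Nil)
ones = Cons 1 ones

toList :: Int -> List a -> [a]  -- take at most n elements
toList _ Nil = []
toList n (Cons x xs)
  | n <= 0    = []
  | otherwise = x : toList (n - 1) xs
```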
Re: [Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)
On 22 October 2010 12:03, Dan Doel dan.d...@gmail.com wrote:
> data Mu f = In { out :: f (Mu f) }
>
> instance Show (f (Mu f)) => Show (Mu f) where
>   show = show . out
>
> Is that an example of a value recursive dictionary?

Assuming the Show (f (Mu f)) instance uses the (Mu f) one, AFAIK this should indeed build a loopy dictionary. I think this extension was motivated by Scrap your Boilerplate with Class - see section 5 of http://research.microsoft.com/en-us/um/people/simonpj/papers/hmap/gmap3.pdf.

Cheers, Max
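A runnable sketch of exactly that instance (with an illustrative ListF functor): Show (Mu f) requires Show (f (Mu f)), and the derived Show for ListF in turn refers back to Show (Mu f) - a loopy dictionary that GHC ties into a knot:

```haskell
{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

newtype Mu f = In { out :: f (Mu f) }

-- Showing a Mu f value just unrolls one layer and delegates.
instance Show (f (Mu f)) => Show (Mu f) where
  show = show . out

data ListF a r = NilF | ConsF a r deriving Show

type IntList = Mu (ListF Int)

exampleList :: IntList
exampleList = In (ConsF 1 (In NilF))
```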
[Haskell-cafe] Re: Scrap your rolls/unrolls
Forgot to reply to list

On 22 October 2010 12:14, Dan Doel dan.d...@gmail.com wrote:
> Another solution, though, is SHE. With it, you can write:
>
>     data ListF a r = NilF | ConsF a r
>     newtype List a = Roll (ListF a (List a))
>
>     pattern Nil = Roll NilF
>     pattern Cons x xs = Roll (ConsF x xs)
>
> And not worry about Rolls anymore.

Ah yes, pattern synonyms. This solution is somewhat unsatisfying because you will also need some smart constructors:

    nil = Roll NilF
    cons x xs = Roll (ConsF x xs)

Now the names of the smart constructors for building the data type are not the same as the things you pattern match on, which is a slightly annoying asymmetry. It's certainly less complicated than the TypeFamilies-based solution though!

Cheers, Max
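For what it's worth, GHC itself (7.8 and later) grew native pattern synonyms; implicitly bidirectional ones work both as patterns and as expressions, which removes the smart-constructor/pattern asymmetry described above. A sketch:

```haskell
{-# LANGUAGE PatternSynonyms #-}

data ListF a r = NilF | ConsF a r
newtype List a = Roll (ListF a (List a))

pattern Nil :: List a
pattern Nil = Roll NilF

pattern Cons :: a -> List a -> List a
pattern Cons x xs = Roll (ConsF x xs)

-- The same Nil/Cons names both build and match:
len :: List a -> Int
len Nil         = 0
len (Cons _ xs) = 1 + len xs
len _           = 0  -- GHC cannot see that the synonyms are exhaustive

-- ghci> len (Cons 'a' (Cons 'b' Nil))
-- 2
```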
Re: [Haskell-cafe] Re: Scrap your rolls/unrolls
On 22 October 2010 22:06, Dan Doel dan.d...@gmail.com wrote:
> SHE's pattern synonyms also work as expressions, so there's no asymmetry. They work just like constructors as far as I can tell (with what little I've played with them); you can even partially apply them (in expressions).

I didn't realise that - pretty cool!

Max
[Haskell-cafe] Re: Eta-expansion and existentials (or: types destroy my laziness)
Hi Oleg,

Thanks for your reply!

> Really? Here are two 'cons' that seem to satisfy all the criteria

Thanks - your definitions are similar to Roman's suggestion. Unfortunately my criterion 3) is not quite what I actually wanted - I really wanted something GHC-optimisable (so non-recursive definitions are a necessary but not sufficient condition).

The problem is that I'd like to do the static argument transformation on the Stream argument to cons so that GHC can optimise it properly. This is why I've made my cons pattern match on str directly, so the local run/step loop can refer to the lexically bound step/state fields of the stream being consed on to.

As Roman suggests, the best way to get GHC to optimise these compositions would be to do something like extending GHC so it can do the SAT by itself :-). Alternatively, a type-safe eta for data types involving existentials would let me do what I want without GHC changes - but I don't think there is a way to define this.

> Finally, if one doesn't like existentials, one can try universals:
> http://okmij.org/ftp/Algorithms.html#zip-folds
> http://okmij.org/ftp/Haskell/zip-folds.lhs

I hadn't seen this - thanks! It is certainly a neat trick. I can't quite see how to use it to eliminate the existential I'm actually interested in eta-expanding without causing work duplication, but I'll keep thinking about it.

Cheers, Max
Re: [Haskell-cafe] Strict Core?
Hi Gregory,

On 15 October 2010 22:27, Gregory Crosswhite gcr...@phys.washington.edu wrote:
> Out of curiosity, are there any plans for GHC to eventually use the Strict Core language described in http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf?

I do not have plans to add it. I think it would be worth it - perhaps worth a few % points in runtime - but I've started researching supercompilation instead, which has more impressive effects :-)

Simon has said he is keen to use it, though - it's just a big engineering task to replumb GHC to use it, so perhaps this is a project for an enterprising student.

That said, I've been told that UHC's core language uses the ideas from Strict Core, and they have/had a student at Utrecht (Tom Lokhorst) who was working on implementing optimisations like arity raising and deep unboxing for the language.

Cheers, Max
[Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)
Hi Cafe,

I've run across a problem with my use of existential data types, whereby programs using them are forced to become too strict, and I'm looking for possible solutions to the problem. I'm going to explain what I mean by using a literate Haskell program. First, the preliminaries:

    {-# LANGUAGE ExistentialQuantification #-}
    import Control.Arrow (second)
    import Unsafe.Coerce

Let's start with a simple example of an existential data type:

    data Stream a = forall s. Stream s (s -> Maybe (a, s))

This is a simplified version of the type of streams from the stream fusion paper by Coutts et al. The idea is that if you have a stream, you have some initial state s, which you can feed to a step function. The step function either says "Stop! The stream has ended!", or yields an element of the stream and an updated state.

The use of an existential quantifier is essential to this code, because it means that we can write most functions on Streams in a non-recursive fashion. This in turn makes them amenable to inlining and simplification by GHC, which gives us the loop fusion we need.

An example stream is that of natural numbers:

    nats :: Stream Int
    nats = Stream 0 (\n -> Just (n, n + 1))

Here, the state is just the next natural number to output. We can also build the list constructors as functions on streams:

    nil :: Stream a
    nil = Stream () (const Nothing)

    cons :: a -> Stream a -> Stream a
    cons a (Stream s step) = Stream Nothing (maybe (Just (a, Just s)) (fmap (second Just) . step))

List functions can also easily be expressed:

    taken :: Int -> Stream a -> Stream a
    taken n (Stream s step) = Stream (n, s) (\(n, s) -> if n <= 0 then Nothing else maybe Nothing (\(a, s) -> Just (a, (n - 1, s))) (step s))

To see this all in action, we will need a Show instance for streams.
Note how this code implements a loop where it repeatedly steps the stream (starting from the initial state):

    instance Show a => Show (Stream a) where
      showsPrec _ (Stream s step) k = '[' : go s
        where go s = maybe (']' : k) (\(a, s) -> shows a . showString ", " $ go s) (step s)

We can now run code like this:

    test1 = do
      print $ (nil :: Stream Int)            -- []
      print $ cons 1 nil                     -- [1, ]
      print $ taken 1 $ cons 1 $ cons 2 nil  -- [1, ]

Now we may wish to define infinite streams using value recursion:

    ones :: Stream Int
    ones = cons 1 ones

Unfortunately, 'ones' is just _|_! The reason is that cons is strict in its second argument. The problem I have is that there is no way to define cons which is simultaneously:

1. Lazy in the tail of the list
2. Type safe
3. Non-recursive

If you relax any of these constraints it becomes possible. For example, if we don't care about using only non-recursive functions, we can get the cons we want by taking a roundtrip through the skolemized (existential-eliminated - see http://okmij.org/ftp/Computation/Existentials.html) version of streams:

    data StreamSK a = StreamSK (Maybe (a, StreamSK a))

    skolemize :: Stream a -> StreamSK a
    skolemize (Stream s step) = StreamSK (fmap (\(a, s') -> (a, skolemize (Stream s' step))) $ step s)

    unskolemize :: StreamSK a -> Stream a
    unskolemize streamsk = Stream streamsk (\(StreamSK next) -> next)

    instance Show a => Show (StreamSK a) where
      showsPrec _ (StreamSK mb_next) k = maybe (']' : k) (\(a, ssk) -> shows a . showString ", " $ shows ssk k) mb_next

Now we can define:

    cons_internally_recursive x stream = cons x (unskolemize (skolemize stream))

This works because unskolemize (skolemize stream) /= _|_ even if stream is bottom. However, this is a non-starter because GHC isn't able to fuse together the (recursive) skolemize function with any consumer of it (e.g. unskolemize).
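The reason the skolemized representation side-steps the strictness problem can be seen in isolation: consing onto a StreamSK never inspects the tail, so value recursion is productive. A minimal sketch (consSK and takeSK are illustrative names, not from the original post):

```haskell
data StreamSK a = StreamSK (Maybe (a, StreamSK a))

consSK :: a -> StreamSK a -> StreamSK a
consSK x xs = StreamSK (Just (x, xs))  -- lazy in the tail

onesSK :: StreamSK Int
onesSK = consSK 1 onesSK  -- productive, unlike 'ones' for Stream

takeSK :: Int -> StreamSK a -> [a]
takeSK n (StreamSK m)
  | n <= 0    = []
  | otherwise = case m of
      Nothing        -> []
      Just (a, rest) -> a : takeSK (n - 1) rest

-- ghci> takeSK 3 onesSK
-- [1,1,1]
```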
In fact, to define a correct cons it would be sufficient to have some function (eta :: Stream a -> Stream a) such that (eta s) has the same semantics as s, except that eta s /= _|_ for any s. I call this function eta because it corresponds to classical eta expansion. We can define a type class for such operations with a number of interesting instances:

    class Eta a where
      -- eta a /= _|_
      eta :: a -> a

    instance Eta (a, b) where
      eta ~(a, b) = (a, b)

    instance Eta (a -> b) where
      eta f = \x -> f x

    instance Eta (StreamSK a) where
      eta ~(StreamSK a) = StreamSK a

If we had an instance for Eta (Stream a) we could define a lazy cons function:

    cons_lazy :: a -> Stream a -> Stream a
    cons_lazy x stream = cons x (eta stream)

As we have already seen, one candidate instance is that where eta = unskolemize . skolemize, but I've already ruled that possibility out. Given that constraint, the only option that I see is this:

    instance Eta (Stream a) where
      -- Doesn't type check, even though it can't go wrong:
      --eta stream = Stream (case stream of Stream s _ -> s) (case stream of Stream _ step -> step)
      eta stream = Stream (case stream of Stream s _ -> unsafeCoerce s :: ()) (case stream of Stream _ step -
Re: [Haskell-cafe] Eta-expansion and existentials (or: types destroy my laziness)
On 16 October 2010 12:16, Roman Leshchinskiy r...@cse.unsw.edu.au wrote:
> eta :: Stream a -> Stream a
> eta s = Stream s next
>   where
>     next (Stream s next') = case next' s of
>       Just (x, s') -> Just (x, Stream s' next')
>       Nothing      -> Nothing
>
> Making GHC optimise stream code involving eta properly is hard :-)

Good point. I don't exactly mean non-recursive for requirement 3) then - I mean an adjective with a fuzzier definition, like "GHC-optimisable" :-)

Max
Re: [Haskell-cafe] Client-extensible heterogeneous types (Duck-typed variadic functions?)
On 14 October 2010 08:34, Jacek Generowicz jacek.generow...@cern.ch wrote:
> Those other data might be the functions' arguments, or they might be other functions with which they are to be combined, or both.

You can represent these as existential packages. However, as Oleg shows you can always use skolemisation to eliminate the existential: http://okmij.org/ftp/Computation/Existentials.html

This trick is basically what Brandon and Evan pointed out earlier when they suggested you replace the list :: [exists b. (b -> a, b)] with a list :: [a].

> Here's an example where lazy evaluation isn't enough:
>
>     def memoize(fn):
>         cache = {}
>         def memoized_fn(*args):
>             if args not in cache:
>                 cache[args] = fn(*args)
>             return cache[args]
>         return memoized_fn
>
> You memoize a function once, but it will be given different arguments, many times, at a later time.

I'm not sure why you would use existentials for this. Isn't the type of memoized_fn just :: Ord a => (a -> b) -> a -> b? This doesn't deal with argument *lists*, so you may have to curry/uncurry to get functions of a different arity to go through, but that is IMHO a reasonable requirement for Haskell, where multi-argument functions do not have special status.

(In the absence of side effects, I can't see an obvious way to implement it without some way to enumerate the domain a, though. Conal Elliott uses type classes to solve this issue - see http://hackage.haskell.org/package/MemoTrie, where memo :: HasTrie t => (t -> a) -> t -> a.)

> will do. So please allow me to store (fnA1, fnA2) and (fnB1, fnB2) in the same place. The program can tell that it can combine them with (.) because the type of

But if the only operation you ever do on this pair is (.), you may as well skolemise and just store (fnA1 . fnA2) directly. What is the advantage of doing otherwise?

Max
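A pure-Haskell sketch of the kind of memoization being discussed, specialised to a non-negative Int domain built from a lazily shared structure (MemoTrie generalises this via the HasTrie class; memoList and fib are illustrative names):

```haskell
-- Cache f's results in a lazy list: each cell is computed at most once
-- and shared across all subsequent calls.
memoList :: (Int -> a) -> (Int -> a)
memoList f = (map f [0 ..] !!)

fib :: Int -> Integer
fib = memoList fib'  -- a CAF, so the cache list is shared
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)  -- recursive calls hit the cache

-- ghci> fib 50
-- 12586269025
```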
Re: [Haskell-cafe] Client-extensible heterogeneous types (Duck-typed variadic functions?)
On 14 October 2010 08:56, Max Bolingbroke batterseapo...@hotmail.com wrote:
> But if the only operation you ever do on this pair is (.), you may as well skolemise and just store (fnA1 . fnA2) directly. What is the advantage of doing otherwise?

I forgot to mention that if you *really really* want to program with the type [exists b. (b -> a, b)] directly, you can do it without defining a new data type to hold the existential package by using CPS style and making use of the logical law that not(exists a. P[a]) == forall a. not(P[a]):

    {-# LANGUAGE Rank2Types, ImpredicativeTypes #-}

    foo :: [forall res. (forall b. (b -> Bool, b) -> res) -> res]
    foo = [\k -> k (not, True), \k -> k ((< 10), 5), \k -> k (uncurry (==), ("Hi", "Hi"))]

    main :: IO ()
    main = print $ [k (\(f, x) -> f x) | k <- foo]

I pass to each k in the foo list a continuation that consumes that item in the list (in this case, a function and its arguments) and returns a result of uniform type (in this case, Bool).

Cheers, Max
Re: [Haskell-cafe] Finite but not fixed length...
On 13 October 2010 08:57, Jason Dusek jason.du...@gmail.com wrote:
> Is there a way to write a Haskell data structure that is necessarily only one or two or seventeen items long; but that is nonetheless statically guaranteed to be of finite length?

Maybe you want a list whose denotation is formed by a least (rather than a greatest) fixed point? i.e. the type of spine-strict lists:

    data FinList a = Nil | Cons a !(FinList a) deriving (Show)

    ones_fin = 1 `Cons` ones_fin

    take_fin n Nil = Nil
    take_fin n (Cons x rest)
      | n <= 0    = Nil
      | otherwise = Cons x (take_fin (n - 1) rest)

    ones = 1 : ones

    main = do
      print $ take 5 ones
      print $ take_fin 5 ones_fin

If you have e :: FinList a then if e /= _|_ it is necessarily of finite length.

Max
Re: [Haskell-cafe] Re: Eta-expansion destroys memoization?
On 8 October 2010 10:54, Simon Marlow marlo...@gmail.com wrote:
> We could make GHC respect the report, but we'd have to use
>
>     (e `op`) == let z = e in \x -> z `op` x
>
> to retain sharing without relying on full laziness.

This might be a good idea in fact - all other things being equal, having lambdas be more visible to the compiler is a good thing. Of course, this change could cause a performance regression in existing code.

Personally, I think that this is a report bug: there is no syntactic lambda in (e `op`) or (`op` e), so I would expect the evaluation of e to be shared.

Max
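The sharing difference at issue can be sketched as follows (an illustration only - the observable behaviour is sensitive to optimisation level, since full laziness at -O can float the unshared thunk out anyway):

```haskell
import Debug.Trace (trace)

-- The let-based translation: the expensive expression is a single
-- thunk shared across every application of the resulting function.
sharedAdd :: Integer -> Integer
sharedAdd = let z = trace "eval" (sum [1 .. 1000]) in \x -> z + x
-- "eval" is printed at most once, however many times sharedAdd is called

-- The naive lambda: a fresh thunk per call (absent full laziness).
unsharedAdd :: Integer -> Integer
unsharedAdd x = trace "eval" (sum [1 .. 1000]) + x
-- without -O, "eval" is printed on every call
```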