Re: Observation on Hadrian's relative performance re current buildsystem
On 17 Nov 2017, at 18:46, Ben Gamari wrote:
> Simon Peyton Jones via ghc-devs writes:
>> Urk! I expected Hadrian to be faster because it has more accurate
>> dependencies.
>
> While this is just speculation, this might actually be one of the
> reasons why the no-op case is slower: in make's case the dependency
> graph is mostly static, whereas in Hadrian the build system needs to
> discover dependencies dynamically.
>
> Either way, thanks for this characterization, Herbert!

But surely the timing for a full build from scratch is not the most important thing to compare? In my work environment, full builds are extremely rare; the common case is an incremental build after pulling changes from upstream. Is this something you can measure?

Regards, Malcolm

___ ghc-devs mailing list ghc-devs@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
Re: Request for feedback: deriving strategies syntax
On 18 Aug 2016, at 06:34, Bardur Arantsson wrote:
> Not a native (British) English speaker, but I've consumed a *lot* of UK
> media over the last ~25-30 years and I can literally only recall having
> heard "bespoke" used *once*, and that was in the term "bespoke suit",
> where you can sort-of guess its meaning from context. I believe this is
> also the only context in which it's actually really used in British
> English. (However, I'll let the native (British) English speakers chime
> in on that.)

"Bespoke" is a reasonably common British English word, used in all of the following phrases:

    bespoke software
    bespoke solution
    bespoke furniture
    bespoke kitchen
    bespoke tailoring

The meaning is "specially and individually made for this client": the opposite of standard, off-the-shelf, pre-packaged. It implies the outcome was not automatable, even if the individual pieces being assembled were machine-cut.

"In the U.S., bespoke software is often called custom or custom-designed software." http://whatis.techtarget.com/definition/bespoke

Regards, Malcolm
Re: cpphs: can't be used to build GHC 7.10.3. Bug?
Thanks for the bug report, and for the detailed analysis. I will try to look at and fix this soon.

Regards, Malcolm

On 10 Jan 2016, at 20:09, Alain O'Dea wrote:
> Got a clear answer about the handling of #if defined.
>
> Expanding macros within #if defined is non-compliant if cpphs is trying to
> be a C99 preprocessor:
> http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
> 6.10.1/1 Conditional Inclusion (p. 148) indicates that the token after
> defined, or within defined ( ), is an identifier, not a macro to be expanded.
>
> I'm not sure what's involved in fixing this behavior in cpphs, but I'm
> happy to test fixes.
>
> On Sun, Jan 10, 2016 at 3:53 PM Alain O'Dea wrote:
> I've isolated the issue to the handling of #if defined on multi-argument
> macros.
>
> I took a crack at interpreting the cpphs source for this and I think it may
> be a bug in the conversion of defined expressions here in
> Language.Preprocessor.CppIfdef:
>
>     convert "defined" [arg] =
>         case lookupST arg st of
>           Nothing | all isDigit arg    -> return arg
>           Nothing                      -> return "0"
>           Just (a@AntiDefined{})       -> return "0"
>           Just (a@SymbolReplacement{}) -> return "1"
>           Just (a@MacroExpansion{})    -> return "1"
>
> It looks like it will macro-expand the contents of a defined expression,
> which isn't what GCC does. I don't know if GCC is wrong or if this is
> undefined behavior for parameterized macros within #if defined.
>
> #if defined works on single-argument macros.
>
> working1.hs:
>
> {-# LANGUAGE CPP #-}
>
> #define EXAMPLE_MACRO(arg) (\
>     arg)
>
> #if defined(EXAMPLE_MACRO)
> #endif
>
> preprocess it (it works!):
>
> $ cpphs --cpp working1.hs -o $tempfile
> $
>
> #ifdef works on multiple-argument macros.
>
> working2.hs:
>
> {-# LANGUAGE CPP #-}
>
> #define EXAMPLE_MACRO(arg1,arg2) (\
>     arg1 \
>     arg2)
>
> #ifdef EXAMPLE_MACRO
> #endif
>
> preprocess it (it works!):
>
> $ cpphs --cpp working2.hs -o $tempfile
> $
>
> #if defined fails on multi-argument macros.
> broken2.hs:
>
> {-# LANGUAGE CPP #-}
>
> #define EXAMPLE_MACRO(arg1,arg2) (\
>     arg1 \
>     arg2)
>
> #if defined(EXAMPLE_MACRO)
> #endif
>
> preprocess it (it fails!):
>
> $ cpphs --cpp broken2.hs -o $tempfile
> cpphs: macro EXAMPLE_MACRO expected 2 arguments, but was given 0
> $
>
> I've posted a StackOverflow question to see if anyone knows whether this is
> undefined behavior:
> http://stackoverflow.com/questions/34709769/is-cpphs-wrong-or-is-the-behavior-of-macros-with-arguments-in-if-defined-express
>
> If it is undefined behavior, we should stop relying on it in GHC sources.
> Either way, the behavior is inconsistent with GCC, which complicates things.
>
> Best,
> Alain
>
> On Sun, Jan 10, 2016 at 2:04 PM Alain O'Dea wrote:
> Hi Malcolm:
>
> cpphs is under consideration as a replacement for GCC's C preprocessor in
> the GHC toolchain:
> https://ghc.haskell.org/trac/ghc/wiki/Proposal/NativeCpp
>
> GHC 7.10.3's build fails when cpphs is used as the C preprocessor
> (--with-hs-cpp=cpphs --with-hs-cpp-flags="--cpp").
>
> It runs into this error when preprocessing libraries/base/GHC/Natural.hs:
>
> cpphs: macro MIN_VERSION_integer_gmp expected 3 arguments, but was given 0
>
> I've reproduced this issue on Ubuntu 14.04 x86-64 and SmartOS 15.3.0 x86-64.
>
> Interestingly, the error seems to arise only when preprocessing Natural.hs
> while the autogenerated cabal-macros.h is present. Removing that include
> from the cpphs flags leads to a clean preprocessing run.
>
> I have more details of this investigation here:
> https://gist.github.com/AlainODea/bd5b3e0e6f7c4227f009
>
> Is this a bug?
>
> Best,
> Alain
Re: Making compilation results deterministic (#4012)
On 16 Sep 2015, at 10:43, Simon Peyton Jones wrote:
> I'll hypothesise that you mean
> · In --make mode, with unchanged source, I want to get the same output
> from compiling M.hs if M's imported interface files have not changed.
> But even then I'm confused. Under those circumstances, why are we
> recompiling at all?

My understanding is that currently, if you build a Haskell project from clean sources with ghc --make, then wipe all the .o/.hi files, and rebuild again from clean, with all the same flags and environment, you are unlikely to end up with identical binaries for either the .o or .hi files.

This lack of binary reproducibility is a performance problem within a larger build system (of which the Haskell components are only a part): if the larger build system sees that the Haskell .hi or .o has apparently changed (even though the sources+flags have not changed at all), then many other components that depend on them may be triggered for an unnecessary rebuild.

Regards, Malcolm
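A reproducibility check of the kind Malcolm describes can be scripted by hashing the corresponding outputs of two clean builds. Here is a minimal sketch (the `nondeterministic_outputs` helper and the two-directory layout are hypothetical, not part of any GHC tooling):

```python
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """Content hash of one build artifact."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def nondeterministic_outputs(build_a: Path, build_b: Path, exts=(".hi", ".o")):
    """Compare two build trees produced from identical sources and flags,
    returning the relative paths of .hi/.o files whose bytes differ."""
    diffs = []
    for p in sorted(build_a.rglob("*")):
        if p.suffix in exts:
            q = build_b / p.relative_to(build_a)
            if not q.exists() or digest(p) != digest(q):
                diffs.append(p.relative_to(build_a))
    return diffs
```

Run the same `ghc --make` twice into separate directories and pass both to this function; a non-empty result is exactly the "apparent change" that triggers spurious downstream rebuilds.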
Re: MonadFail proposal (MFP): Moving fail out of Monad
I'm a +1 on this proposal as well. In our private Haskell compiler at work, we have had separate Monad and MonadFail classes since 2010, and it is clearly the more principled way to handle partiality: make it visible in the inferred types. When porting Hackage libraries to our compiler, I found there were very few instances where we needed to change type signatures because of MonadFail, and making the change was in all cases easy anyway.

Regards, Malcolm

On 9 Jun 2015, at 23:19, Edward Kmett wrote:

+1 from me for both the spirit and the substance of this proposal. We've been talking about this in the abstract for a while now (since ICFP 2013 or so) and, as concrete plans go, this strikes me as straightforward and implementable.

-Edward

On Tue, Jun 9, 2015 at 10:43 PM, David Luposchainsky dluposchain...@googlemail.com wrote:

Hello *,

the subject says it all. After we successfully put `Applicative` into `Monad`, it is time to remove something in return: `fail`. Like with the AMP, I wrote up the proposal in Markdown format on Github, which you can find below as a URL, and in verbatim copy at the end of this email. It provides an overview of the intended outcome, which design decisions we had to take, and how our initial plan for the transition looks. There are also some issues left open to discussion.

https://github.com/quchen/articles/blob/master/monad_fail.md

Here's a short abstract:

- Move `fail` from `Monad` into a new class `MonadFail`.
- Code using failable patterns will receive a more restrictive `MonadFail` constraint. Code without this constraint will be safe to use for all Monads.
- The transition will take at least two GHC releases. GHC 7.12 will include the new class, and generate warnings asking users to make their failable patterns compliant.
- Stackage showed an upper bound of less than 500 breaking code fragments when compiled with the new desugaring.
For more details, refer to the link or the paste at the end. Let's get going!

David aka quchen

=== === ===

`MonadFail` proposal (MFP)
==========================

A couple of years ago, we proposed to make `Applicative` a superclass of `Monad`, which successfully killed the single most ugly thing in Haskell as of GHC 7.10. Now, it's time to tackle the other major issue with `Monad`: `fail` being a part of it.

You can contact me as usual via IRC/Freenode as *quchen*, or by email to *dluposchainsky at the email service of Google*. This file will also be posted on the ghc-devs@ and libraries@ mailing lists, as well as on Reddit.

Overview
--------

- **The problem** - reason for the proposal
- **MonadFail class** - the solution
- **Discussion** - explaining our design choices
- **Adapting old code** - how to prepare current code to transition smoothly
- **Estimating the breakage** - how much stuff we will break (spoiler: not much)
- **Transitional strategy** - how to break as little as possible while transitioning
- **Current status**

The problem
-----------

Currently, the `<-` symbol is unconditionally desugared as follows:

```haskell
do pat <- computation     >>>     let f pat = more
                                      f _   = fail "..."
                                  in  computation >>= f
```

The problem with this is that `fail` cannot (!) be sensibly implemented for many monads, for example `State`, `IO`, `Reader`. In those cases it defaults to `error`. As a consequence, in current Haskell, you can not use `Monad`-polymorphic code safely, because although it claims to work for all `Monad`s, it might just crash on you. This kind of implicit non-totality baked into the class is *terrible*.

The goal of this proposal is adding the `fail` only when necessary and reflecting that in the type signature of the `do` block, so that it can be used safely, and more importantly, is guaranteed not to be used if the type signature does not say so.
`MonadFail` class
-----------------

To fix this, introduce a new typeclass:

```haskell
class Monad m => MonadFail m where
    fail :: String -> m a
```

Desugaring can now be changed to produce this constraint when necessary. For this, we have to decide when a pattern match can not fail; if this is the case, we can omit inserting the `fail` call.

The most trivial examples of unfailable patterns are of course those that match anywhere unconditionally,

```haskell
do x <- action     >>>     let f x = more
   more                    in  action >>= f
```

In particular, the programmer can assert any pattern be unfailable by making it irrefutable using a prefix tilde:
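To make the proposed constraint concrete, here is a small self-contained sketch using `MonadFail` as it eventually shipped in base (available from the Prelude in GHC 8.8 and later); `firstWord` is a made-up example, not from the proposal:

```haskell
-- A failable pattern in a do-block forces a MonadFail constraint on the
-- whole computation. For Maybe, fail _ = Nothing, so the partiality is
-- visible in the type rather than crashing via error.
firstWord :: MonadFail m => m String -> m String
firstWord action = do
  (w:_) <- fmap words action  -- (w:_) can fail on input with no words
  return w
```

Running `firstWord (Just "hello world")` yields `Just "hello"`, while `firstWord (Just "   ")` yields `Nothing` instead of a runtime crash, which is precisely the behaviour the MFP is after.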
Re: [Haskell-cafe] RFC: Native -XCPP Proposal
On 24 May 2015, at 14:15, Roman Cheplyaka wrote:
> On 21/05/15 19:07, Herbert Valerio Riedel wrote:
>>> Don't you still have to support -pgmF?
>>
>> I guess so, unfortunately... so we'd have to keep a legacy code-path for
>> external cpp processing around, at least in the short run...
>
> It's not just about legacy; -pgmF is used for all sorts of awesome things;
> literate markdown is one example.

I think Herbert meant that -pgmP will also need to continue to be supported.

Regards, Malcolm
Re: [Haskell-cafe] RFC: Native -XCPP Proposal
Interesting. I'm not completely clear, when you say that your company distributes binaries to third-parties: do you distribute ghc itself? Or just the product that has been built by ghc? Regards, Malcolm On 21 May 2015, at 10:16, Yitzchak Gale wrote: LGPL is well-known and non-acceptable here. Show me some serious case law for Malcolm's customized LGPL and we can start talking. Other than that, explanations are not going to be helpful. Thanks, Yitz On Thu, May 21, 2015 at 4:51 AM, Howard B. Golden howard_b_gol...@yahoo.com wrote: Hi Yitzchak, I believe there are good explanations of open source licenses aimed at lawyers and management. I don't think their fears are well-founded. If you work for a timid company that isn't willing to learn, you should consider going elsewhere. You may be happier in the long run. Respectfully, Howard On May 20, 2015, at 7:39 AM, Yitzchak Gale g...@sefer.org wrote: The license issue is a real concern for any company using GHC to develop a product whose binaries they distribute to customers. And it is concern for GHC itself, if we want GHC to continue to be viewed as a candidate for use in industry. The real issue is not whether you can explain why this license is OK, or whether anyone is actually going to the trouble of building GHC without GMP. The issue is the risk of a *potential* legal issue and its potential disastrous cost as *perceived* by lawyers and management. A potential future engineering cost, no matter how large and even if only marginally practical, is perceived as manageable and controllable, whereas a poorly understood potential future legal threat is perceived as an existential risk to the entire company. With GMP, we do have an engineering workaround to side-step the legal problem entirely if needed. 
Whereas if cpphs were to be linked into GHC with its current license, I would be ethically obligated to report it to my superiors, and the response might very well be: "Then never mind, let's do the simple and safe thing and just rewrite all of our applications in Java or C#."

Keeping the license as is seems to be important to Malcolm. So could we have an option to build GHC without cpphs and instead use it as a stand-alone external program? That would make the situation no worse than GMP.

Thanks, Yitz
Re: [Haskell-cafe] RFC: Native -XCPP Proposal
On 21 May 2015, at 15:54, Bardur Arantsson wrote:
> fork/exec is almost certainly going to be negligible compared to the
> overall compile time anyway. It's not like GHC is fast enough for it to
> matter.

Don't count on it. On our Windows desktop machines, fork/exec costs approximately one third of a second, instead of the expected small number of milliseconds or less. The reasons are unknown, but we suspect a misconfigured anti-virus scanner (and for various company-policy reasons we are prohibited from doing the investigation that could confirm or deny this hypothesis).

This means that when ghc --make does lots of external things requiring a fork, such as preprocessing, a medium-sized project (using many library packages) can take a surprisingly large amount of time (minutes instead of seconds), even for an incremental build where very little code has changed. We think an in-process cpphs could make some of our compilations literally hundreds of times faster.

Regards, Malcolm
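The per-spawn cost Malcolm quotes is easy to measure directly. A minimal sketch (the `spawn_overhead` helper is hypothetical; on Windows, where process creation goes through CreateProcess rather than fork/exec, substitute a command such as `cmd /c exit 0` for `true`):

```python
import subprocess
import time

def spawn_overhead(n=20, cmd=("true",)):
    """Median wall-clock cost of spawning one trivial external process.
    A healthy Unix machine reports well under 10ms; a third of a second,
    as on the Windows machines described above, is pathological."""
    times = []
    for _ in range(n):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True)
        times.append(time.perf_counter() - t0)
    return sorted(times)[n // 2]
```

Multiplying the measured overhead by the number of files a `ghc --make` run preprocesses gives a rough estimate of the time an in-process cpp would recover.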
Re: Non-deterministic package IDs are really bad in 7.10
I'd like to add a +1 to getting this fixed. It bites us at work, where exact binary reproducibility of builds is strongly desirable. Regards, Malcolm On 14 May 2015, at 17:38, George Colpitts wrote: Currently the priority of #4012 is normal, shouldn't it be at least high? Also the milestone is 7.12.1, should it be 7.10.2? On Sun, May 10, 2015 at 3:39 PM, Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk wrote: Hi, I'd like to bring some attention to ticket #4012 about non-determinism. As many of you may know, the nix package manager distributes binaries throughout its binary caches. The binaries are shared as long as the hash of some of their inputs matches: this means that we can end up with two of the same hashes of inputs but thanks to #4012 means that the actual contents can differ. You end up with machines with some packages built locally and some elsewhere and due to non-determinism, the GHC package IDs don't line up and everything is broken. The situation was pretty bad in 7.8.4 in presence of parallel builds so we switched those off. Joachim's a477e8118137b7483d0a7680c1fd337a007a023b helped a great deal there and we were hopeful for 7.10. Now that 7.10.1 is out and people have been using and testing it, the situation turns out to be really bad: daily we get multiple reports from people about their packages ending up broken and our only advice is to do what we did back in 7.8 days which is to purge GHC and rebuild everything locally or fetch everything from a machine that already built it all, as long as the two don't mix. This is not really acceptable. See comment 76 on #4012 for an example of a rather simple file you can compile right now with nothing extra but -O and get different interface hash. This e-mail is just to raise awareness that there is a serious problem. 
If people are thinking about cutting 7.10.2 or whatever, I would consider part of this ticket to be a blocker, as it makes using GHC reliably while benefitting from binary caches pretty much impossible.

Of course there's the 'why don't you fix it yourself' question. I certainly plan to but do not have time for a couple more weeks to do so. For all I know right now, the fix to comment 76 might be easy, and someone looking for something to hack on might have the time to get to it before I do.

Thanks
-- Mateusz K.
Re: [Haskell-cafe] RFC: Native -XCPP Proposal
Exactly. My post was an attempt to elicit response from anyone to whom it matters. There is no point in worrying about hypothetical licensing problems - let's hear about the real ones.

Regards, Malcolm

On 7 May 2015, at 22:15, Tomas Carnecky wrote:
> That doesn't mean those people don't exist. Maybe they do but are too
> afraid to speak up (due to corporate policy or whatever).
>
> On Thu, May 7, 2015 at 10:41 PM, Malcolm Wallace malcolm.wall...@me.com wrote:
> I also note that in this discussion, so far not a single person has said
> that the cpphs licence would actually be a problem for them.
>
> Regards, Malcolm
>
> On 7 May 2015, at 20:54, Herbert Valerio Riedel wrote:
> On 2015-05-06 at 13:38:16 +0200, Jan Stolarek wrote:
> [...] Regarding licensing issues: perhaps we should simply ask Malcolm
> Wallace if he would consider changing the license for the sake of GHC? Or
> perhaps he could grant a custom-tailored license to the GHC project? After
> all, the project page [1] says: "If that's a problem for you, contact me
> to make other arrangements."
>
> Fyi, Neil talked to him[1]:
> | I talked to Malcolm. His contention is that it doesn't actually change
> | the license of the ghc package. As such, it's just a single extra
> | license to add to a directory full of licenses, which is no big deal.
>
> [1]: http://www.reddit.com/r/haskell/comments/351pur/rfc_native_xcpp_for_ghc_proposal/cr1e5n3

___ Haskell-Cafe mailing list haskell-c...@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Re: [Haskell-cafe] RFC: Native -XCPP Proposal
On 8 May 2015, at 00:06, Richard A. O'Keefe wrote:
> I think it's important that there be *one* cpp used by Haskell.
> fpp is under 4 kSLOC of C, and surely Haskell can do a lot better.

FWIW, cpphs is about 1600 LoC today.

Regards, Malcolm
Re: HP 2015.2.0.0 and GHC 7.10
With the advent of efficient build tools, would it be too outrageous to suggest we start to move to automated HP releases on an even faster (but rigid) schedule, e.g. weekly or monthly? As new versions of libraries come out, they could be incorporated in the Platform as soon as they are verified to build successfully with the other dependencies, and the next weekly/monthly bundle would have them. Then there would always be a recent Platform good enough for even the bleeding-edge types of users. It would eliminate the rush of "please can we hold off until foo is released in a few days' time" requests, which has tended to cause schedule-drift in the past. Continuous integration FTW!

Regards, Malcolm

On 28 Mar 2015, at 02:58, Gershom B wrote:
> On March 27, 2015 at 10:47:12 PM, Mark Lentczner (mark.lentcz...@gmail.com) wrote:
>> Is "soon" measured in hours? If not - I suggest that it misses. I'm
>> pushing that we change how we do this Platform thing - and make it stick,
>> like glue, to the GHC release schedule. Sure, this time 'round we'll be
>> out of sync with those "it's almost there" packages... but next time
>> we'll know it's coming and hopefully, we'll have these panic attacks as
>> GHC is in beta, not post-release.
>
> +1 to this sentiment. Now that we can do efficient platform builds, better
> to release regularly and efficiently. Otherwise we'll always be playing
> catch-up to "one more thing."
>
> -g

___ Libraries mailing list librar...@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
Re: GHC 7.8.4 for Windows 32bit
Any progress yet on win32 builds?

Regards, Malcolm

On 3 Mar 2015, at 14:05, Austin Seipp wrote:
> Sorry about that; at the time I was having trouble with my build bots, and
> I've continuously had trouble remoting into the Win32 machines I have...
> The 64-bit build bots worked fine, but the 32-bit ones consistently had
> flaky internet connectivity, to the point I couldn't bootstrap GHC itself.
>
> I tried yesterday again, and seem to have fallen prey to something else
> (which makes it so I cannot RDP in at all right now, it seems). I've
> actually assembled a new PC here (that I have not yet installed Windows
> on), however, so hopefully I can shortly get binaries up.
>
> On Tue, Mar 3, 2015 at 7:51 AM, Neil Mitchell ndmitch...@gmail.com wrote:
> Hi,
>
> GHC 7.8.4 was released a little while ago, but there still isn't a Windows
> 32-bit version available. With my MinGHC maintainer hat on, we're getting
> quite a lot of requests for a 32-bit 7.8.4 version. With my "using Haskell
> commercially" hat on, the lack of a 32-bit build is causing delays to
> infrastructure upgrades. Any timeline for a 32-bit Windows build? Anything
> you're stuck on?
>
> Going forward, it seems the 32-bit Windows version remains popular, so if
> all releases and release candidates came with such a version it would be
> great. I note there wasn't (and still might not be) a 7.10 RC, and as a
> result I've only tested my packages on Linux.
>
> Thanks, Neil
>
> -- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - feedback on Mac OS
On 1 Jan 2015, at 13:58, George Colpitts wrote:
> Configuring cpphs-1.13...
> Building cpphs-1.13...
> Warning: cpphs.cabal: Unknown fields: build-depends (line 5)
> Could not find module ‘Prelude’
> It is a member of the hidden package ‘base-4.8.0.0’.
> Perhaps you need to add ‘base’ to the build-depends in your .cabal file.

The two statements "unknown field build-depends" and "add package to build-depends" seem rather contradictory. How can this be fixed?

Regards, Malcolm
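For what it's worth, the warning is consistent with the error: Cabal only recognises `build-depends` inside a stanza, so a field left at the top level of an old-style `.cabal` file is reported as "unknown" and then ignored, leaving the package with no dependency on `base`. A hypothetical sketch of the shape a sectioned `.cabal` file expects (the stanza name and dependency list here are illustrative, not cpphs's actual ones):

```cabal
Name:          cpphs
Version:       1.13
-- In the failing file, Build-Depends sat here at the top level,
-- where sectioned Cabal parsers do not look for it.

Executable cpphs
  Main-Is:        cpphs.hs
  -- The dependencies belong inside the stanza:
  Build-Depends:  base, directory
```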
Re: Is USE_REPORT_PRELUDE still useful?
On 29 Oct 2014, at 00:24, David Feuer wrote:
> A lot of code in GHC.List and perhaps elsewhere compiles differently
> depending on whether USE_REPORT_PRELUDE is defined. Not all code differing
> from the Prelude implementation. Furthermore, I don't know to what extent,
> if any, such code actually works these days. Some of it certainly was not
> usable for *years*, because GHC.List did not import GHC.Num.

I'm not completely certain, but I have a vague feeling that the Haskell Report appendices that define the standard libraries might be auto-generated (LaTeX/HTML/etc) from the base library sources, and might use these #ifdefs to get the right version of the code.

Regards, Malcolm
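For readers unfamiliar with the convention under discussion, here is a sketch of the USE_REPORT_PRELUDE pattern: the same function is given both the Haskell Report's one-line definition and a hand-written recursive version, selected by CPP. The function name `and'` and both bodies here are illustrative, not copied from GHC.List:

```haskell
{-# LANGUAGE CPP #-}
-- Compile with -DUSE_REPORT_PRELUDE to get the Report's definition;
-- otherwise the explicitly recursive version is used.
and' :: [Bool] -> Bool
#ifdef USE_REPORT_PRELUDE
and' = foldr (&&) True
#else
and' []     = True
and' (x:xs) = x && and' xs
#endif
```

Both branches compute the same result; the point of the #ifdef is that the Report branch documents (and, per Malcolm's guess, may feed) the standard, while the other branch is the one GHC actually ships.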
Re: Tentative high-level plans for 7.10.1
On 6 Oct 2014, at 10:28, Herbert Valerio Riedel wrote:
> As I'm not totally sure what you mean: assuming we had decided years ago
> to follow an LTS style, given GHC 7.0, 7.2, 7.4, 7.6, 7.8 and the future
> 7.10, which of those GHC versions would you have considered an LTS version?

We continue to use 7.2, at least partly because all newer versions of ghc have had significant bugs that affect us. In fact, 7.2.2 also has a show-stopping bug, but we patched it ourselves to create our very own custom ghc-7.2.3 distribution.

Regards, Malcolm
Re: The future of the haskell2010/haskell98 packages - AKA Trac #9590
How about doing the honest thing, and withdrawing both packages in ghc-7.10? Haskell'98 is now 15 years old, and the 2010 standard was never really popular anyway. Regards, Malcolm On 30 Sep 2014, at 21:21, Austin Seipp aus...@well-typed.com wrote: Hello developers, users, friends, I'd like you all to weigh in on something - a GHC bug report, that has happened as a result of making Applicative a superclass of Monad: https://ghc.haskell.org/trac/ghc/ticket/9590 The very condensed version is this: because haskell2010/haskell98 packages try to be fairly strictly conforming, they do not have modules like Control.Applicative. Unfortunately, due to the way these packages are structured, many things are simply re-exported from base, like `Monad`. But `Applicative` is not, and cannot be imported if you use -XHaskell2010 and the haskell2010 package. The net result here is that haskell98/haskell2010 are hopelessly broken in the current state: it's impossible to define an instance of `Monad`, because you cannot define an instance of `Applicative`, because you can't import it in the first place! This leaves us in quite a pickle. So I ask: Friends, what do you think we should do? I am particularly interested in users/developers of current Haskell2010 packages - not just code that may *be* standard Haskell - code that implies a dependency on it. There was a short discussion between me and Simon Marlow about this in the morning, and again on IRC this morning between me, Duncan, Edward K, and Herbert. Basically, I only see one of two options: - We could make GHC support both: a version of `Monad` without `Applicative`, and one with it. This creates some complication in the desugarer, where GHC takes care of `do` syntax (and thus needs to be aware of `Monad`'s definition and location). But it is, perhaps, quite doable. - We change both packages to export `Applicative` and follow the API changes in `base` accordingly. 
Note that #1 above is contingent on three things:

1) There is interest in this actually happening, and these separate APIs being supported. If there is not significant interest in maintaining this, it's unclear if we should go for it.

2) It's not overly monstrously complex. (I don't think it necessarily will be, but it might be.)

3) You can't link `haskell2010` packages and `base` packages together in the general case, but, AFAIK, this wasn't the case before either.

I'd really appreciate your thoughts. This must be sorted out for 7.10 somehow; the current situation is hopelessly busted.

-- Regards, Austin Seipp, Haskell Consultant Well-Typed LLP, http://www.well-typed.com/
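To see concretely why the haskell2010 package is broken by the AMP, here is a minimal sketch: since GHC 7.10, `Applicative` is a superclass of `Monad`, so defining a `Monad` instance requires an `Applicative` instance, which in turn requires `Control.Applicative` to be importable. The `Box` type below is a made-up example:

```haskell
-- Under the AMP, this chain of instances is mandatory; with haskell2010's
-- module set, the Applicative step is impossible because the class
-- cannot be imported at all.
data Box a = Box a deriving (Eq, Show)

instance Functor Box where
  fmap f (Box a) = Box (f a)

instance Applicative Box where
  pure = Box
  Box f <*> Box a = Box (f a)

instance Monad Box where          -- return defaults to pure
  Box a >>= f = f a
```

Deleting the `Applicative` instance (or being unable to name the class, as under haskell2010) makes the `Monad` instance a type error, which is exactly the pickle the ticket describes.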
Re: help understanding the validate_build_xhtml failure of ./validate for my CPP patch
On 23 Mar 2014, at 20:33, Carter Schonwald wrote:
> Hey all, I'm trying to get my CPP_Settings patch to validate, and I'm
> trying to understand why my validate is failing. At this point I'm stumped
> and I really could use some help!

This looks like the key information:

    Building xhtml-3000.2.1...
    Preprocessing library xhtml-3000.2.1...
    ghc: could not execute:

Ghc is reporting that it could not execute an empty command name. This suggests the commandline for external preprocessing is incorrectly formed.

Regards, Malcolm
Re: Haddock strings in .hi files
On 20 Mar 2014, at 16:41, Edward Kmett wrote:
> My observation was mostly that when I run 'cabal install', it goes through
> all the modules building my .hi files, etc. Then I run 'cabal haddock' and
> it spends all that time redoing the same work, just to go through and get
> at some information that we had right up until the moment we finished
> building. I'm not wedded to bolting the information into the .hi files
> being the solution, but the idea that we could avoid redoing that work is
> tantalizing.

One obvious solution could be for Haddock to learn how to read the existing .hi files, solely to read out the type signature of any exported entity that does not have an explicit signature in the source file.

Regards, Malcolm
Re: Proposal: Partial Type Signatures
Since I never used them, I have honestly been under the impression that the TypeHoles language extension named exactly this partial-type-signatures thing. I have loved the idea of underspecifying the type signature ever since it was first mooted many years ago. So what does TypeHoles do, if not this?

Regards, Malcolm

On 12 Mar 2014, at 15:09, Edward Kmett wrote:
> Clearly, given that term-level holes are called TypeHoles, the extension
> to enable these should be called KindHoles. =)
>
> Er.. I'll show myself out.
>
> -Edward

On Wed, Mar 12, 2014 at 9:35 AM, Thomas Winant thomas.win...@cs.kuleuven.be wrote:

Dear GHC developers,

Together with Tom Schrijvers, Frank Piessens and Dominique Devriese, I have been working on a proposal for adding *Partial Type Signatures* to GHC. In a partial type signature, annotated types can be mixed with inferred types. A type signature is written like before, but can now contain wildcards, written as underscores. The types of these wildcards, or unknown types, will be inferred by the type checker, e.g.

    foo :: _ -> Bool
    foo x = not x
    -- Inferred: Bool -> Bool

The proposal also includes a form of generalisation which aligns with the existing generalisation that GHC does. We have written down a motivation (when and how might you use this) and details about the design and implementation on the following wiki page:

https://ghc.haskell.org/trac/ghc/wiki/PartialTypeSignatures

We have a (work in progress) implementation [1] of the feature based on GHC. It currently implements most of what we propose, but there are some remaining important bugs, mostly concerning the generalisation. We also described our design and presented a formalisation based on the OutsideIn(X) formalism in a paper [2] presented at PADL'14.

What we are hoping to get from the people on this list is any of the below:

* Read the design, play with the implementation, and tell us any comments you may have about the feature, its design and implementation.
* Opinions on whether this feature might be acceptable in GHC upstream at some point (if not, we do not think it's worth developing the implementation much further). * Perhaps a code review or a discussion with someone more knowledgeable about the internals of GHC's type checker about how we might fix the remaining problems in our implementation (specifically, we could use some help with implementing the generalisation of partial type signatures). * Feedback on the `Questions and issues' section on the wiki page. Kind regards, Thomas Winant [1]: https://github.com/mrBliss/ghc-head/ [2]: https://lirias.kuleuven.be/bitstream/123456789/423475/3/paper.pdf Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm ___ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs ___ ghc-devs mailing list ghc-devs@haskell.org http://www.haskell.org/mailman/listinfo/ghc-devs
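[For readers coming to this thread later: the feature sketched above eventually shipped in GHC as the PartialTypeSignatures extension.  The following is a minimal self-contained sketch of the wildcard syntax under that extension; the module and function names are illustrative, not from the proposal's implementation.]

```haskell
{-# LANGUAGE PartialTypeSignatures #-}
{-# OPTIONS_GHC -Wno-partial-type-signatures #-}
module Main where

-- The wildcard '_' is filled in by the type checker: here it must be
-- Bool, because 'not' demands a Bool argument.
foo :: _ -> Bool
foo x = not x

main :: IO ()
main = print (foo False)   -- prints True
```

Without the OPTIONS_GHC pragma, GHC reports the inferred type of each wildcard as a warning, which is a convenient way to recover the full signature.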
Re: cpphs bug
>> This appears to be a cpphs bug.  For the following code
>>
>>     #define x (1 == 1)
>>     #if x
>>     YES
>>     #else
>>     NO
>>     #endif
>>
>> cpphs 1.18.1 prints NO, while the expected output (and the output GNU cpp produces) is YES.
>
> I acknowledge that this is a bug in cpphs

OK, bug now fixed in version 1.18.2.

Regards,
    Malcolm

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
Re: cpphs bug
On 21 Feb 2014, at 16:04, Roman Cheplyaka wrote:

> This appears to be a cpphs bug.  For the following code
>
>     #define x (1 == 1)
>     #if x
>     YES
>     #else
>     NO
>     #endif
>
> cpphs 1.18.1 prints NO, while the expected output (and the output GNU cpp produces) is YES.

I acknowledge that this is a bug in cpphs.  It is actually a bit worse than this - the following code also outputs NO with cpphs, when it should clearly be YES:

    #define x 0 == 0
    #if x
    YES
    #else
    NO
    #endif

It is all to do with the recursive expansion of symbols during parsing, where currently we are interpreting the first symbol in the expansion as the intended boolean value, rather than re-parsing the whole expanded expression.  Fixing it will be a little more involved than just adding another clause to the parser, but I'm on it.

Regards,
    Malcolm

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
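The fix described above can be sketched in a few lines of Haskell: fully expand defined symbols in the `#if` condition first, and only then parse the result as a boolean expression.  This is an illustrative model only, not cpphs's actual implementation; `Env`, `expand`, and `evalIf` are made-up names, and the expression grammar here handles just enough to show the point.

```haskell
module Main where

import qualified Data.Map as Map

-- A macro environment: symbol name -> replacement text.
type Env = Map.Map String String

-- Substitute defined symbols word-by-word until nothing changes.
-- (A real implementation would also guard against cyclic definitions.)
expand :: Env -> String -> String
expand env s
  | s' == s   = s
  | otherwise = expand env s'
  where s' = unwords [ Map.findWithDefault w w env | w <- words s ]

-- Evaluate the *fully expanded* condition, instead of treating its
-- first token as the boolean value (the bug discussed above).
evalIf :: Env -> String -> Bool
evalIf env cond = case words (expand env cond) of
  [a, "==", b] -> a == b
  [x]          -> x /= "0"
  _            -> False

main :: IO ()
main = print (evalIf (Map.fromList [("x", "0 == 0")]) "x")  -- prints True
```

With the buggy behaviour, only the first token of the expansion (`0`) would be inspected, giving False; re-parsing the whole expansion `0 == 0` gives the correct True.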
Re: Another CPP gotcha for the manual
Of course, cpphs solved this problem nearly a decade ago.

Regards,
    Malcolm

On 3 Nov 2013, at 17:57, Mark Lentczner wrote:

> Turns out that C pre-processing removes C-style (/*…*/) comments.  Of course, it will do this anywhere in the source.  The following program can tell if it is compiled with -cpp or not:
>
>     module Main where
>
>     (*/**) :: Double -> Double -> Double
>     (*/**) a b = a / (2 ** b)
>
>     howCompiled = if use > 1/16 then "normal" else "-cpp"
>       where use = 1 */** 2 + 3 */** 4
>
>     main = putStrLn $ "compiled " ++ howCompiled
>
> When run:
>
>     $ runhaskell CppTest.hs
>     compiled normal
>     $ runhaskell -cpp CppTest.hs
>     compiled -cpp
>
> An example in the wild is in the package wai-extra, in the file Network/Wai/Middleware/RequestLogger.hs, where the */* construct appears twice in the comments.
>
> Short of defining and implementing our own CPP-like preprocessing (something we might actually consider), I don't think there really is any fix for this, so the bug is that it should appear in the GHC documentation on CPP mode (§4.12.3), along with similar warnings about trailing back-slashes.
>
> Note that the way in which a multi-line comment is removed differs between gcc and clang.  In gcc, the comment is removed, and the content of the line before the comment and the content of the line after the comment are joined into a single line.  In clang, the two line fragments are kept on separate lines.  In both cases extra empty lines are added to keep the line count the same.  The consequence of the gcc / clang difference is that neither the above code, nor wai-extra, will compile with clang.
>
> Note: As far as I can tell this is not a clang bug, but a failure of specs: the C definition of comments and their removal is vague, and my guess is gcc chose its method based on historical use.  The C++ definition makes it clear that comments are whitespace, even once removed, and so the clang method is correct.
>
> - Mark

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
Re: Another CPP gotcha for the manual
AFAIK, it is solely the (L)GPL licence issue.  GHC central preferred to use/distribute the GPL'd gcc compiler rather than the GPL'd cpphs preprocessor.  (No, it made no sense to me either.)

Regards,
    Malcolm

On 4 Nov 2013, at 09:40, Herbert Valerio Riedel wrote:

> Hello Malcolm,
>
> On 2013-11-04 at 10:28:27 +0100, Malcolm Wallace wrote:
>> Of course, cpphs solved this problem nearly a decade ago.
>
> Btw, what has been the reason it hasn't been adopted as bundled `cpp` replacement in the GHC distribution in the past?
>
> (if it remains a separate executable, its GPL licence shouldn't be an issue -- after all, ghc relies on the gcc executable which is GPL'ed too)
>
> cheers,
>   hvr

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
Re: NOTICE: Gitolite migration is complete.
On 9 Aug 2013, at 23:19, Austin Seipp wrote:

> * None of you have shell access to ghc.haskell.org anymore (well, this is a nice update for us administrators :)

Is ghc.haskell.org the same machine as darcs.haskell.org?  I notice that my nightly build of nhc98 failed a few hours ago with the error:

    darcs failed:  Not a repository: darcs.haskell.org:/home/darcs/nhc98
    ((scp) failed to fetch: darcs.haskell.org:/home/darcs/nhc98/_darcs/inventory)

whilst it certainly succeeded the night before.  Also, manual use of ssh now refuses me access to the machine:

    $ ssh darcs.haskell.org
    Permission denied (publickey).

Anything you can do to fix that?  Darcs depends on the ssh protocol for transfers.

Regards,
    Malcolm

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
Re: how to checkout proper submodules
On 5 Jun 2013, at 16:47, Austin Seipp wrote:

> testsuite and base are also useful for other compilers, such as nhc98 (and indeed, nhc uses base itself.)

Useful, perhaps, but not actually used in practice.  Since the base library repo moved from darcs to git, I think that ghc is the only compiler that uses it.  (Maybe the jhc, uhc, or Helium people could refute that though.)

For a long, long time, the close coupling between ghc and the base library has been obvious.  I have long since given up trying to pretend that base is portable - it is not.  It is ghc-specific.  I don't think it should be.  That is a crazy architecture.  But it is the way it is.  Maybe it is time for everyone else to stop pretending too.

Regards,
    Malcolm

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
Re: Advance notice that I'd like to make Cabal depend on parsec
On 14 Mar 2013, at 14:53, Duncan Coutts wrote:

> Why did I choose parsec?  Practicality dictates that I can only use things in the core libraries, and the nearest thing we have to that is the parser lib that is in the HP.

I fully agree that a real parser is needed for Cabal files.  I implemented one myself, many years ago, using the polyparse library and a hand-written lexer.  Feel free to reuse it (attached, together with a sample program) if you like, although I expect it has bit-rotted a little over time.

Regards,
    Malcolm

cabal-parse2.hs
Description: Binary data

CabalParse2.hs
Description: Binary data

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
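[For readers curious what a parsec-based Cabal parser might look like, here is a toy sketch of parsing top-level "field: value" lines.  This is an illustration only, not Cabal's real grammar: sections, indentation-based continuation lines, and conditionals are all omitted, and the names `field`/`fields` are made up.]

```haskell
module Main where

import Text.Parsec
import Text.Parsec.String (Parser)

-- One "name: value" field, as found at the top level of a .cabal file.
field :: Parser (String, String)
field = do
  name <- many1 (alphaNum <|> char '-')   -- field names like "build-depends"
  _    <- char ':'
  _    <- many (char ' ')                 -- skip spaces, but not the newline
  val  <- many (noneOf "\n")              -- rest of the line is the value
  return (name, val)

-- A sequence of fields, one per line.
fields :: Parser [(String, String)]
fields = field `sepEndBy` newline

main :: IO ()
main = print (parse fields "<cabal>" "name: acme\nversion: 1.0")
-- prints Right [("name","acme"),("version","1.0")]
```

Note the deliberate use of `many (char ' ')` rather than `spaces`: parsec's `spaces` also consumes newlines, which would swallow the field separator.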