We *do* think that stability is important (which is why we're using validate now), and all the changes made recently have been made in good faith, for good reasons. That's not to say people don't have valid concerns - but let's deal with the motivation first. Here's some text I just sent to Manuel & Roman before I saw this thread:

----------
Let me say I'm sorry that you're disillusioned with the build system. I'll try to explain why things are the way they are.

Before we switched to using Cabal for building libraries, we had most of the functionality of Cabal duplicated in Makefiles, and all the packages had to carry extra metadata to support both Cabal and the GHC build system. The two sets of metadata had to be kept in sync. It was a maintenance nightmare.

We need Cabal because we need to be able to build packages outside of the GHC build system; this is why Hackage is flourishing and Haskell is successfully emerging from its rut. And the things that Cabal needs to do are almost exactly the same as the things we need to do in the GHC build system, so it makes no sense to implement all of this twice, in two different languages. We can't implement Cabal in make, because we don't want to rely on the target system having make or a set of Unix tools at all.

So clearly the best way to avoid all this duplication was to use Cabal directly. But in order to do that, we had to make sure that we didn't lose things like 'make -j', so I added support to Cabal for generating a plain Makefile (in addition to the existing support for ghc --make; remember that we don't depend on make outside the GHC build system). This is what we use now: Cabal understands the package metadata (the .cabal file) and spits out a Makefile that the rest of the GHC build system invokes, and which can also be invoked by hand (e.g. for building an individual module with -ddump-simpl). The Makefile is also extensible, in the sense that it #includes a user-supplied Makefile that can be used to add extra rules and settings.

What have we lost? Well, now a large chunk of our build system is not private to GHC but shared with the rest of the world, and that makes it harder to change. This hasn't been an issue so far, since most of the functionality we need for GHC is also useful (or necessary) when building packages separately. In a few cases where we needed to do something a bit special, Ian has added custom tools based on the Cabal library - e.g. installPackage.
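
To make "custom tools based on the Cabal library" concrete, here is a minimal sketch of the pattern. This is not the actual installPackage; it just assumes a Cabal version that exposes readPackageDescription, and the .cabal file name is made up:

    -- Hypothetical sketch of a small tool built on the Cabal library:
    -- read a package's .cabal metadata, then do some GHC-specific step.
    import Distribution.PackageDescription (package, packageDescription)
    import Distribution.PackageDescription.Parse (readPackageDescription)
    import Distribution.Verbosity (normal)

    main :: IO ()
    main = do
      gpd <- readPackageDescription normal "mypackage.cabal"
      -- 'package' gives the PackageIdentifier (name and version);
      -- a real tool would go on to configure, register or install.
      print (package (packageDescription gpd))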

So while I'm not claiming that our build system is ideal - clearly the fact that it's a generic framework means that it's not as simple as a GHC-only build system - I don't think there are any fundamental problems here. The build system is still extensible in the sense that

(a) to do something in GHC for *every* package, add code to
      libraries/Makefile or Makefile.local;
(b) to do something for *one* package, modify that package's Setup.hs
      or .cabal file (see the sketch after this list);
(c) to do something for every package, both in the GHC build system and
      outside it, modify Cabal.
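
For (b), here is a minimal sketch of what a package's Setup.hs typically looks like, assuming the package uses the stock Distribution.Simple interface; simpleUserHooks is the point where a package would hook in custom build steps:

    -- Minimal Setup.hs sketch: replace or extend simpleUserHooks to add
    -- per-package build steps without touching the GHC build system.
    import Distribution.Simple (defaultMainWithHooks, simpleUserHooks)

    main :: IO ()
    main = defaultMainWithHooks simpleUserHooks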

Cabal is not as inaccessible as it seems - Duncan and the other Cabal contributors have put a lot of effort into making it easy to hack on. There's a source guide for example (http://hackage.haskell.org/trac/hackage/wiki/SourceGuide).

----------


Austin Seipp wrote:

> 1. Parallel builds (i.e. make -j, brought up by ChilliX)

Not true: make -j works just as well as it always did. We'd like to be able to build libraries in parallel with each other, but that needs some extra support in the build system for understanding inter-package dependencies and scheduling the build.

> 2. A working build system (and by association a buildable HEAD)

> Why the switch to a cabal infrastructure? To stress test it? To make
> building simpler (i.e. win32)?

Yes, it makes building simpler.

> To prove a point? What's the end result
> and what do we hope to gain? The old build system wasn't broke; why
> fix it?

When we used to use make instead of Cabal to build the libraries, the build system certainly *was* broke, in that a lot of functionality and metadata was duplicated, and often inconsistent.

The recent change to use Cabal for building GHC itself was made mainly for simplicity and consistency, but it also brought some benefits: for example, we can now generate Haddock documentation for GHC.



As I see it, the biggest problem is that the Mac OS X build keeps breaking, because we don't actively test on that platform. We *do* test on other platforms: in fact we use the validate script that was originally proposed by Manuel & Roman.

A buildbot would help, but it would need to be online all the time and to run whenever new patches are pushed. Alternatively, we could try to set up some kind of patch-submission filter that runs validate on three platforms, but that's a lot of infrastructure to implement and maintain. I think we could probably be persuaded, but it is a big investment and we need to be sure that it's going to pay off.

So this is a useful discussion to have. I've presented the motivations as I understand them - for the most part, the work Ian has been doing seemed to us like the "obvious" way to go, which is why we didn't have a big open discussion about it, but clearly others feel differently. I'm not by any means saying that this is the way it must be done - we're certainly open to suggestions for improving things, even if that means ripping it all out and starting again (though of course that would be a much longer-term goal and would need a lot of planning first).


Cheers,
        Simon

