The following is based on some of my assumptions; feel free to correct as need be :-)
If the source tree wasn't so large, I assume anyone building the code would be expected to download the entire source and run a top-level "./configure" and "make" once, like most any other project. I assume that when the modularization took place it was decided that each module should be separate and stand-alone, so a developer could work on some subset without having to download the entire source tree.

I assume the reason things like build.sh exist is that there is no top-level ./configure script. A top-level configure script would be able to sort out the dependencies and build everything in the correct order, but it would expect all the code to be available, since it would need to generate files (e.g. Makefiles) everywhere, even in submodules that won't get built. So if there were a top-level ./configure script, it couldn't run unless all the code were available. build.sh is more flexible in this way.

As a thought experiment: if we did use something like cmake, would it be able to solve the problem of not having to download and have available all the code in order to build? Switching each of the existing modules to cmake from the autotools wouldn't solve the problem of not having a top-level cmake or ./configure script, would it? We'd still need something to tie it nicely together and download and build just a subset of the available code, no?
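To make the thought experiment a bit more concrete, here is a minimal sketch of what a top-level CMake "superbuild" could look like, using ExternalProject to fetch and build only the modules a developer switches on. The module names, the placeholder URLs, and the single staging prefix are illustrative assumptions and not the actual X.Org layout; the individual modules are assumed to stay on autotools (autogen.sh/configure) and are just driven from above.

  # Hypothetical top-level superbuild; module names and URLs are placeholders,
  # not the real X.Org module layout.
  cmake_minimum_required(VERSION 3.10)
  project(xorg-modular-superbuild NONE)

  include(ExternalProject)

  # Each module gets an on/off switch; nothing is downloaded or configured
  # for modules that are switched off.
  option(BUILD_LIBX11  "Build libX11"       ON)
  option(BUILD_XSERVER "Build the X server" OFF)

  # One staging prefix so later modules can find the pkg-config files of
  # earlier ones (roughly what build.sh does with PKG_CONFIG_PATH today).
  set(STAGING_PREFIX ${CMAKE_BINARY_DIR}/stage)

  if(BUILD_LIBX11)
    ExternalProject_Add(libX11
      GIT_REPOSITORY    https://example.invalid/xorg/libX11.git   # placeholder URL
      BUILD_IN_SOURCE   1
      CONFIGURE_COMMAND ${CMAKE_COMMAND} -E env PKG_CONFIG_PATH=${STAGING_PREFIX}/lib/pkgconfig
                        <SOURCE_DIR>/autogen.sh --prefix=${STAGING_PREFIX}
      BUILD_COMMAND     make
      INSTALL_COMMAND   make install
    )
  endif()

  # Dependencies are declared explicitly; CMake then derives the build order,
  # which is the ordering job a top-level configure (or build.sh) does today.
  set(XSERVER_DEPS "")
  if(BUILD_LIBX11)
    list(APPEND XSERVER_DEPS libX11)
  endif()

  if(BUILD_XSERVER)
    ExternalProject_Add(xserver
      GIT_REPOSITORY    https://example.invalid/xorg/xserver.git  # placeholder URL
      DEPENDS           ${XSERVER_DEPS}
      BUILD_IN_SOURCE   1
      CONFIGURE_COMMAND ${CMAKE_COMMAND} -E env PKG_CONFIG_PATH=${STAGING_PREFIX}/lib/pkgconfig
                        <SOURCE_DIR>/autogen.sh --prefix=${STAGING_PREFIX}
      BUILD_COMMAND     make
      INSTALL_COMMAND   make install
    )
  endif()

Note that this top-level file is exactly the "something to tie it nicely together": converting the individual modules to cmake wouldn't remove the need for it. What a superbuild like this would add is a standard place to express the inter-module dependencies and have the download-only-what-you-build step handled for you.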
