On Mon, Feb 17, 2020 at 8:18 AM <[email protected]> wrote:

> On 2020/02/16 21:25:42, hanwenn wrote:
> > please, for the love of god, do not use automake.
> >
> > It is slow and arcane, and generally a complete PITA to work with. We
> > created stepmake after fighting with automake for a while.
>
> Do you have concrete numbers for automake being "slow" and are you sure
> that it's still the case?
My experience has been that automake is built on top of make, and
whenever you touch a file (e.g. a Makefile), it is prone to rerunning
itself. This then incurs another configure step, because if Makefile.in
changes, you have to rerun configure (or config.status) again.

It is also a Makefile-per-directory approach, which is fundamentally
broken. It doesn't do wildcards, and they're proud of it
(https://www.gnu.org/software/automake/manual/html_node/Wildcards.html),
so if you add or rename a file, you have to rerun automake for the
build to pick it up. It is easy to forget such manual steps, which
leads to hard-to-debug problems.

The whole infrastructure stack is terminally broken. The standard GNU
build system is a stack where you run a 250kb Perl program (long live
the 1990s) to construct the input for a 500kb m4 program (long live
the 1980s), so it can generate a 500kb shell script to find out where
the compiler lives. The shell script is compatible with pre-3.0 UWin
KSh. The resulting makefile (long live the 1970s) is compatible with a
host of odd-ball versions of Make. For building the software, you then
have to run libtool, a 300kb shell script that is compatible with all
kinds of obscure Unixes. So it is possible to build a dynamic library
for lilypond on Solaris 4 and SGI Irix 5 (long live the 1990s). The
GNU build technology makes things very complicated so you can use it
on obsolete platforms.

> At the moment Stepmake is just broken in so many points (*) and a
> nightmare to maintain. So IMO it would be ok to trade off 5%
> performance during configure (which we can easily offset by removing
> outdated checks) with probably the same performance during builds -
> maybe even better.

There is some value in fixing our build system, but it may be smaller
than you think. The C++ part of LilyPond is utterly trivial to
compile. The hard part is that the fonts, the documentation and the
regression tests have custom logic, and if you move to a different
system, you will have to rewrite that logic from scratch. If you do
this in Automake, it will be extra hard, because you cannot write the
Makefiles directly (remember, automake output is compatible with Make
versions that don't support $(wildcard), so you write the input to the
automake process). You would have to rewrite the logic to be somehow
compatible with Automake's idea of a makefile.

The real problem with stepmake was that we tried to make something
generic that was usable in other programs. That caused some
over-engineering, and it would be good to clean that up. We could
improve things by moving to something else (CMake+Ninja? SCons?), but
before contemplating a move, it would be good to decide what precisely
the problem is. And I would stay away from whatever the GNU project
thinks is a good idea, because it was utterly broken 20 years ago, and
even more so now.

> (*) Take this change as one example. Another one that comes to mind
> (because I experienced it yesterday) is how our build system reorders
> compilation, i.e. when invoking a serial `make' you don't get the
> same order of .cc -> .o files. This is really insane if you work on
> headers, want to fix one particular issue in one translation unit and
> suddenly get errors + warnings from another file!

I don't think there is an innate guarantee of ordering, but you can
probably make this more predictable by calling $(sort ...) wherever we
call $(wildcard ...). Also, if you do parallel compilation, all bets
about ordering are off.
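To make the $(sort)/$(wildcard) suggestion concrete, a minimal GNU
Make sketch (file and variable names are illustrative, not taken from
our tree) -- and note that it relies on GNU-isms, which is exactly
what automake-generated output has to avoid:

  # Hypothetical fragment, assuming GNU Make.  $(wildcard) picks up
  # added or renamed files without regenerating anything, and $(sort)
  # makes the list deterministic (it also removes duplicates), so a
  # serial build always compiles in the same order.
  CC_FILES := $(sort $(wildcard *.cc))
  O_FILES  := $(patsubst %.cc,out/%.o,$(CC_FILES))

  all: $(O_FILES)

  out/%.o: %.cc | out
          $(CXX) $(CXXFLAGS) -c -o $@ $<

  out:
          mkdir -p out

A parallel "make -j" will of course still interleave the actual
compile jobs, but the prerequisite lists themselves stay stable.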
If you need to work on a translation unit, just run "make out/unit.o".

-- 
Han-Wen Nienhuys - [email protected] - http://www.xs4all.nl/~hanwen
