 > mk vs. nmake was kicked around for some time around 20 years ago and
 > nmake won.  nmake was able to scale up better to large projects.

I would like to hear more about that.  Certainly the mk model is not
aware of directories.  But I love the simplicity of mk.  Were I doing
it over, there are very few things I would do differently.  One
exception is that mk suffers from a serious phase-ordering problem: mk
itself cannot be used to compute dependencies and then use them to
make decisions.  Instead, it's necessary to have a separate 'compute
dependencies' phase, write the dependencies into a file, and then run
mk again.  Also, if there is a single dependencies file, it is
expensive to compute, so it is generally not generated automatically by
mk.  Anyway, this is clearly broken, and a model which enabled one to
compute and memoize dependencies dynamically would be a step forward.
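
To make the problem concrete, here is roughly the sort of thing I end
up writing (a sketch only: deps.mk, the depend target, and the
gcc-style -MM flag are all illustrative, nothing blessed by mk):

# deps.mk must already exist (it may be empty) before the first run
<deps.mk

%.o: %.c
	cc -c $stem.c

depend:V:
	cc -MM *.c >deps.mk

Running 'mk depend' and then 'mk' again is exactly the phase split I
would like to see go away.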

What about nmake made it better able to scale?

 > Most of the complexity of nmake is related to the language it uses
 > to construct rules for projects and new languages.

I would like to understand the sources of this complexity, how much is
necessary, how much is accidental, and so on.  Is there a paper on
nmake or perhaps a chapter in your group's book? (Practical Reusable...)

 > Most of what you need to know is contained in the introduction
 > to nmake at http://www.research.att.com/~gsf/nmake.

I've looked at this page before, and I am quickly overwhelmed by the
large number of predefined rules and operators, which seem to be
introduced in a rather discursive fashion, with little hint of the
underlying model that must have been in the designer's mind.

The version of mk I am using now emerged from Vita Nuova via plan9,
and although this distribution channel has uglied up the code
something fierce, it has one property I really like: there are *no*
predefined rules.  If I need something predefined, I generally start a
mkfile with

<$HOME/lib/mk/builtins

and I am in business.  But everything that is there is something I
understand and control.
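
For concreteness, the sort of thing that lives in that file is nothing
fancier than this (an illustrative fragment, not the actual contents):

CC=cc
CFLAGS=-g

%.o: %.c
	$CC $CFLAGS -c $stem.c

clean:V:
	rm -f *.o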

 > Since nmake is able to handle files in multiple directories,
 > recursive makefiles are often not needed.

mk can do this as well, but the paths to those files must be given
explicitly.  Still, I confess I have far too many recursive mkfiles for
building software and web sites for the classes I teach.
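
To be concrete about what mk can do: a single mkfile can name files in
other directories, provided it spells out the paths (a sketch with
made-up names):

prog: main.o lib/util.o
	cc -o prog main.o lib/util.o

lib/util.o: lib/util.c lib/util.h
	cc -c -o lib/util.o lib/util.c

Keeping those paths written out by hand is part of what drives me to
those recursive mkfiles.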

 > However, for a collection of independent tools, nmake is able
 > to scan the makefiles in the directories and decide the order
 > to build the tools underneath.

Does it do dependency analysis on the action blocks (if that is the
correct term) in order to learn which tools are built using which
other tools?  What is going on there?

 > The equivalent of autoconf and configure is iffe (IF Feature Exists)
 > which is a small language that allows you to ask questions such
 > as does the library routine exist in library x, or does this symbol exist
 > in header y.

I'm on board with iffe, although it's very C-centric and so not of
much use to me.  My only complaint about iffe was that it was slow,
but that was 15 years ago and I'm sure it's faster now :-)
I vaguely seem to remember that Phong wrote an SP&E paper on iffe, but
I could be making that up.


Norman
