So! I've been watching the whole "script/don't script" debate with interest, so I thought I'd weigh in with my opinion (for whatever it's worth). It's based on my experience developing, maintaining, and managing automated builds for a variety of projects: vectorizing/parallelizing compilers with hundreds of modules and millions of lines of C code; Windoze device drivers for a multimedia co-processor, which contained code compiled for our co-processor's instruction set in addition to the standard x86 code; and today's tens of moderately sized Java-based projects, with the goal of a build infrastructure shared between them in order to improve the "portability" of developers between projects.
Let me start off by saying that a "build infrastructure" is more than just the build tool, though the build tool is certainly at the heart of the infrastructure. No build tool, not gnumake, ant, cons, or way cool GUIBuild, can meet the needs of all development environments or projects "out of the box"; there is always at least *some* level of customization that takes place. In the simplest cases, this customization amounts to writing the input file that describes the project to your build tool (i.e. makefile, build.xml, etc). In the more common and complex cases, it may start out that way but evolves over time into a common set of "cliches" that are appropriate for that project or that company's projects; the individual build description files become simpler and simpler, while the build infrastructure becomes more complex, hiding the complexity from the individual project description files.
Consider a gnumake-based example: we had a tree of some 100+ modules that had to be compiled, linked, etc. In some cases the source had to be built using an x86 compiler, in other cases using our proprietary compiler. Initially, each makefile provided its own list of compiler options, compiler locator macros, etc. etc. So every time we needed to make a global change, we were editing 100+ makefiles, with each makefile amounting to well over 100 lines. Through a careful process of extraction, we developed a "master" makefile (two actually, one for macro definitions, the other for target definitions) that defined all the common elements. This reduced the size of each module's makefile to an average of 10 lines: define the module's location w.r.t. the top of the tree, include the master makefiles, and define the target files to be built and the sources for each target. The master makefiles handled everything else (setting options, building for the appropriate architecture, etc.). Once this was done, implementing a change to the build (e.g. swapping out one compiler for another) was a simple matter of editing the master makefiles. Developers were thrilled because they didn't need to know the complex syntax of gnumake and could just define the information that was relevant to them: what to build using what sources.
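A module makefile in that scheme might have looked roughly like this (a reconstruction for illustration; the file and macro names here are made up, not the originals):

```make
# Module makefile -- about 10 lines; everything else lives in the masters.
TOP := ../..                       # this module's location w.r.t. top of tree
include $(TOP)/build/macros.mk     # master makefile #1: macro definitions

TARGETS := libcodec.a              # what this module builds
libcodec_SOURCES := encode.c decode.c quant.c
ARCH := mmp                        # selects x86 vs. the proprietary compiler

include $(TOP)/build/targets.mk    # master makefile #2: target definitions
```

The master makefiles then expand `TARGETS` and the per-target `*_SOURCES` lists into the actual compile and link rules for the selected architecture.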
Built on top of all of this was a set of Perl scripts to handle a turn-key build process from start to finish, including checking out source, building, generating an installer, and exporting the installer image to a central build image server.
Now, let me tie this into my perceptions of the ANT script/no script debate...

The people who want fancy scripting capabilities WITHIN a build.xml file are viewing ant as the complete build infrastructure. I believe this to be a mistake: ant DOES NOT stand on its own, it requires an infrastructure built around it that is appropriate to the development environment in which it is deployed. In ant's case, this "infrastructure" will consist primarily of a set of site-specific extensions to ant (i.e. the stuff in optional.jar and more) that define the common elements of that site's build environment. In essence, ant ALREADY has scripting -- it's called Java! The difference, of course, is that there needs to be a mechanism within ant to ensure that those extensions get updated before ant starts evaluating build.xml, since we need to ensure that the latest tweaks to the build infrastructure are picked up each time we build. It is certainly possible (and to my mind, highly desirable) that people will publish their extensions to ant (just as people publish their extensions to Perl on CPAN), that others will find those extensions fit their needs to a greater or lesser extent, and that people will "tune" extensions they get from this repository of published extensions to fit their own unique needs...
In summary: core ant + local extensions + build.xml + ??? = build infrastructure (where ??? may or may not be necessary, but might be, for example, a Perl script to bind it all together for turn-key building, or might equally well be an all-Java sort of thing).
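Concretely, the "local extensions" piece of that equation can be wired into a build.xml with ant's <taskdef> mechanism; something like the following sketch (the task name, class name, and property here are hypothetical, chosen just to show the shape):

```xml
<project name="widget" default="build">
  <!-- pull in a site-specific extension task from the site's extension jar -->
  <taskdef name="cdepend"
           classname="com.example.build.CDependTask"
           classpath="${site.ext.jar}"/>

  <target name="build">
    <!-- the custom task runs like any built-in task -->
    <cdepend srcdir="src" cachefile="depend.cache"/>
    <javac srcdir="src" destdir="classes"/>
  </target>
</project>
```

The build.xml stays a simple description of the project, while the site's conventions live in the extension classes.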
So! What does all this drivel mean for ant? Let me offer my personal opinion --
1. Extending ant should NOT require modifications to core ant. This means, for example, that if I want to add a dependency scanner task for C code, I DO NOT have to modify the ant parser to add an understanding of the <CDepend> tag; rather, the core ant parser sees the tag and knows to look for a CDepend class which extends the "task" interface, etc.

2. There must be accurate USER documentation for the "task" interface that defines the interaction between core ant and the extensions thereto.

3. There should be a way for a task to read and write the values of properties (build variables), but there should also be a hierarchy of permissions on properties (i.e. some should not be writable by tasks, etc.).

4. There should be a way for a task to influence the dependencies of other tasks (i.e. if I implement a dependency scanner, how does that scanner notify ant of the dependencies it finds?).

5. There's probably more, but I think this covers the major bases...
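Points 1 and 3 can be sketched in a few lines of Java. This is a toy model, NOT ant's actual API: the "parser" maps a tag name to a task class purely by naming convention (so adding a task never touches the core), and properties carry a locked flag that task writes cannot override. All class and method names below are invented for illustration.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TaskRegistrySketch {

    /** The "task" interface that extensions implement (point 2's contract). */
    public interface Task {
        void execute(Project project);
    }

    /** Minimal project model with permissioned properties (point 3). */
    public static class Project {
        private final Map<String, String> props = new HashMap<>();
        private final Set<String> locked = new HashSet<>();

        /** Core build sets a property, optionally locking it against tasks. */
        public void setProperty(String name, String value, boolean lock) {
            props.put(name, value);
            if (lock) locked.add(name);
        }

        /** Tasks write through here; locked properties reject the write. */
        public void setPropertyFromTask(String name, String value) {
            if (locked.contains(name))
                throw new IllegalStateException(name + " is not writable by tasks");
            props.put(name, value);
        }

        public String getProperty(String name) {
            return props.get(name);
        }
    }

    /** Point 1: tag "cdepend" resolves to class CdependTask reflectively,
     *  so the core never needs a hard-coded list of known tags. */
    public static Task taskForTag(String tag) {
        String cls = TaskRegistrySketch.class.getName() + "$"
                + Character.toUpperCase(tag.charAt(0)) + tag.substring(1) + "Task";
        try {
            return (Task) Class.forName(cls).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("no task class for <" + tag + ">", e);
        }
    }

    /** An "extension" task; the core code above never names it directly. */
    public static class CdependTask implements Task {
        public void execute(Project p) {
            // a real scanner would record discovered dependencies here
            p.setPropertyFromTask("cdepend.ran", "true");
        }
    }
}
```

The real ant ended up with much this shape: tasks found by name, instantiated reflectively, and handed the project to operate on.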
Thanks,
-Peter
--
Peter A. Vogel
Manager, Configuration Management
Arsin Corporation
4800 Great America Parkway Suite 425, Santa Clara, CA 95054
