Chris Seawood wrote:

> I've run this by a couple of people and I think we've thought it
> through to the point where we need more input from people who might be
> affected.  I want to completely separate our interfaces from the
> implementation of said interfaces.  Practically speaking, I want to
> move the public headers & idl files from their respective places in
> the tree into a module hierarchy under mozilla/include/ .

Travis raised this when he was here, and I threw stones.  I've got lots 
of ammo still :-)

> 
> This change would immediately give us 3 wins:
> 
> 1) Building independent modules will not require the entire tree. 

But independent modules should not have to have their headers and idl 
files separated and forced under a Mozilla-knows-all hierarchy.  Let's 
suppose you do manage to get some set of modules, let's say the 
"embedding Gecko" ones, to so divorce their interfaces from their impls, 
and move the headers and idl files under an include source directory.  
Now we want to integrate a media player, or JS2, or something new that 
itself stands alone and can be linked into other programs.  Why should 
the new module's unitary source tree have to be split, just to be 
integrated into the mozilla tree?  I can say right now that JS2, which 
would want to be embeddable in Apache and other programs, won't split 
itself up across two subtrees.

There is no hierarchy that suits all needs.  We long ago (well, kipp and 
I, in the "ns" tree circa 1995, based on SGI's build system) made an 
export pass to allow diverse modules to be built by visiting their 
unitary trees during one make pass, leaving exportables such as .h and 
.idl files "published" to a common build-time tree (not a source tree).  
The only objection I've ever heard to this scheme is that it takes too 
long to run two passes instead of one pass.  Are there really other 
objections?  If not, how much longer does it take?  Are there other ways 
to speed it up (e.g., Unix builds already symlink-install .h files 
rather than copying them)?
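For concreteness, the export pass can be sketched roughly as follows -- a hypothetical, minimal script (the paths and module names are illustrative, not the actual mozilla build): each module publishes its exportables into a common build-time tree via symlinks, while the sources stay put in each module's own subtree.

```shell
#!/bin/sh
set -e
# Build-time tree, NOT a source tree: nothing here lives in CVS.
DIST="$PWD/dist/include"

# Pretend source layout: two modules keep headers wherever they like.
mkdir -p js/src nsprpub/pr/include "$DIST/js" "$DIST/nspr"
echo '/* jsapi.h */'   > js/src/jsapi.h
echo '/* prtypes.h */' > nsprpub/pr/include/prtypes.h

# Export pass: symlink rather than copy, so the pass is cheap and
# edits to a source header are seen immediately at compile time.
ln -sf "$PWD/js/src/jsapi.h"               "$DIST/js/jsapi.h"
ln -sf "$PWD/nsprpub/pr/include/prtypes.h" "$DIST/nspr/prtypes.h"

# The compile pass then needs only -I"$DIST/js" -I"$DIST/nspr".
ls "$DIST/js" "$DIST/nspr"
```

On systems without symlinks the `ln -sf` lines would degrade to copies, which is the main per-platform cost of the scheme.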

There are other reasons besides the unitary subtree as 
separate/standalone module to let various owners arrange their sources 
as they see fit.  For one thing, owners tend to change their minds about 
source file names and subdirectory structure, and it's really none of 
our business to try to dictate such things.  I for one am loath to make 
wtc move all the nspr public headers out from under 
mozilla/nsprpub/pr/include, mozilla/nsprpub/lib/ds/*.h, and the many 
other dirs in which they live *as source* (as opposed to the single 
include directory into which they are composited via symlinks, during 
the export phase -- another reason for an export phase).

This is the same reason why it's a bad idea to "reach around" from one 
independent module, up the ../../.. path to the mozilla/ directory, and 
down into another independent module.  If those modules are independent, 
i.e., owned by different people who have no claim on one another's 
subdirectory structure, then hardcoding such paths is a recipe for pain.

> Right now, you cannot even build xpcom standalone without pulling the
> entire tree or keeping a separate list of dirs to pull that gets
> quickly outdated. This change would allow us to just build include/ to
> get all of the interfaces and then just build the specific module that
> you are interested in.  This would be a huge win for embedding. 

Except that we'd still have the lack-of-REQUIRES problem: xpcom 
depends on netwerk and possibly prefs (or used to) when built 
"non-standalone" and/or Debug.  Why pretend xpcom is standalone if it 
still depends on a ball of string's header files, or rather, the header 
files for unknown numbers of strings tangled up in the mozilla/include 
ball of string?  Tolerating or, as you propose, exacerbating that 
situation only guarantees that xpcom will entrain more and more 
bogo-dependencies.

> 2) Inter-module dependencies will be better defined.
> 
> This change would incorporate the work being done in bug 59454 which
> is to clearly state inter-module dependencies via the use of
> REQUIRES. This will force engineers to actually think about the code
> in terms of individual and separate modules not just as part of a
> monolithic browser.  This is also key for our various embedding
> efforts. 

(Warren was right; waterson shouldn't have removed the REQUIRES 
requirement!)

We don't need to tyrannize header and idl files from various modules out 
of their local subtrees and into an include subtree to fix that bug.  
All we need to do is require REQUIRES.  It used to work by segregating 
different modules' exportables under subdirectories of $(DIST)/include, 
and could again.  There is no necessary or sufficient relation between 
REQUIRES and this 
all-public-exportable-sources-must-move-to-a-central-include-subtree 
idea.  IOW, the mozilla/include/... subtree of per-module directories 
containing each module's exportables need not live in the CVS 
repository at all; it can exist as a tree of symlinks under 
$(DIST)/include (don't have symlinks? find something like them, or copy 
files -- or get a real OS! :-).
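To make that concrete, here is a hedged sketch (module names, paths, and the REQUIRES value are all illustrative) of how a declared REQUIRES list could be expanded into per-module -I flags against $(DIST)/include, so a module sees only the headers of modules it explicitly declares:

```shell
#!/bin/sh
set -e
# Illustrative only, not the real build system.
DIST="$PWD/dist"
mkdir -p "$DIST/include/nspr" "$DIST/include/xpcom" "$DIST/include/necko"

# Hypothetical declaration in some module's makefile.
REQUIRES="nspr xpcom"

# Expand REQUIRES into one -I flag per declared module.
INCLUDES=""
for mod in $REQUIRES; do
    INCLUDES="$INCLUDES -I$DIST/include/$mod"
done
echo "$INCLUDES" > "$DIST/include-flags.txt"
cat "$DIST/include-flags.txt"
# A compile line would then look like: $CC $INCLUDES -c foo.c
# necko's headers exist under dist/include, but stay invisible here --
# an undeclared #include fails to compile instead of silently working.
```

The point is that the dependency policing happens in the -I list, with no need to move any source file in the repository.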

> 3) Single pass build
> 
> Since the public interfaces will be exported from a centralized
> location, there is no need to traverse the tree once just to export
> the interfaces.  Since not everyone has a 1.5 GHz PC with Ultra160
> SCSI drives (I sure don't :-P), this will help a lot as well. 

How much?  I really don't find that the export phase takes very long, 
when I have a lot of files to recompile -- the compiler and linker take 
orders of magnitude longer.  When I don't know what needs to recompile, 
the time is worth it -- I can't roll the dice and try recompiling 
selectively without losing big, too often.  Of course, if I do know for 
sure that files in only two modules changed (because I changed them, or 
because I watched what cvs pulled very carefully), then I'll run the 
minimal subdirectory makes by hand.

> In fairness, I must note the downsides as well.....there aren't any! ;)
> 
> Ok, potential downside, modules will no longer be contained in a
> single directory hierarchy.  To pull _just_ xpcom source, you will
> always have to pull mozilla/include/xpcom as well as mozilla/xpcom.
> But since modules cannot really be built standalone (you always need
> to have the requisite header files), I wouldn't consider this a valid
> concern. 

I'm not sure what you mean.  The JS engine can be built standalone; it 
depends only on NSPR when built in the context of Mozilla-the-browser 
(with JS_THREADSAFE defined).  There is no unknown hairball 
of dependencies, nor should there be.  The same goes for xpcom -- it's 
absurd for it to have cyclic dependencies on netwerk.  That's a bug for 
us to fix, not a vice to call a virtue.  If you make everything depend 
on everything else's public headers, we'll have more, not fewer, 
embedding and subsetting woes.

> For the initial proof-of-concept run, we will probably just use the
> mac's MANIFEST & MANIFEST_IDL files to generate the list of interfaces
> to export.  But that will really only allow us to implement the single
> pass build & maybe the inter-module dependencies tracking.  To take
> full advantage of the proposed changes would require copying the
> interface files to include/<module> in the cvs repo. 

I'm sorry, but I'm not going to agree to have my module's files so 
copied/removed.  You'll break the JS standalone build that has been used 
by 3rd parties for five years.  I'm not about to waste time writing 
packaging scripts or whatever to recreate the unitary source 
distribution that standalone JS users expect and pull now via cvs co 
mozilla/js.  Ok, I'm being a hardcase here to show that, unless you're 
going to try to make me do extra work to give up my unitary subtree, 
your scheme won't be complete -- you'll have to deal with me, wtc, and 
whoever else disagrees (luddites that we are!).  That makes my point: 
the mozilla tree is too large, and composed of too many different 
owners' modules, to make all headers and idl files fit on a Procrustean 
bed.

> Any thoughts? 

What problem is really being solved here?  Not the lack-of-REQUIRES 
problem -- we had that before, along with symlink-install of diverse 
headers to $(DIST)/include/<module> subdirs.  If it's the slowness of 
the export phase, I'm tempted to say "get a faster machine" -- but I'd 
be open to speeding up what can be optimized.  As long as we're using 
gmake and lots of pattern rules in common .mk include files, I think 
make speedups can win a lot more than tree reorgs that go against 
distributed ownership and truly standalone modularity.

/be

> - cls

