I am in the process of writing a Perl distribution that currently
contains a dozen or so packages, several dozen exportable subroutines,
and over a hundred exportable constants. It has a straightforward
implementation of parallel-named module files (*.pm), documentation
files (*.pod), and test scripts (*.t). I plan to add more ideas and code
to the distribution.
Once upon a time, I created a distribution with circular dependencies.
Then I discovered that a compiler failure in one module that is part of
a circular dependency loop will cause a domino effect whereby none of
the modules in the loop will compile (!). The same goes for all the
test scripts that depend upon any of those modules (!). I recall that
finding and fixing problems under such circumstances could be very
tough. I also recall throwing away my entire working file set, doing a
fresh check-out, and starting over from scratch on more than one occasion.
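The domino effect can be reproduced in miniature. The sketch below writes two hypothetical modules (Alpha and Beta, names invented for illustration) that form a dependency cycle, plants a compile error in Beta.pm, and then shows that neither module will load:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw( tempdir );

my $dir = tempdir( CLEANUP => 1 );

open my $alpha, '>', "$dir/Alpha.pm" or die $!;
print {$alpha} <<'EOM';
package Alpha;
use Beta;                    # Alpha depends upon Beta ...
sub hello { Beta::world() }
1;
EOM
close $alpha;

open my $beta, '>', "$dir/Beta.pm" or die $!;
print {$beta} <<'EOM';
package Beta;
use Alpha;                   # ... and Beta depends upon Alpha: a cycle.
sub world { 'world' }
my $broken = ;               # deliberate compile error
1;
EOM
close $beta;

# Loading *either* module in a fresh interpreter fails: the domino effect.
my %status;
for my $mod (qw( Alpha Beta )) {
    $status{$mod} = system $^X, "-I$dir", '-e', "use $mod";
    printf "use %s: %s\n", $mod, $status{$mod} == 0 ? 'compiled' : 'FAILED';
}
```

Both loads report FAILED: the error in Beta.pm poisons every module in the loop, which is exactly the failure mode described above.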
Since then, I have applied a hierarchical module dependency architecture
to avoid circular dependencies:
1. Modules and test scripts that do not depend upon any other modules
within the distribution are "dependency level 0".
2. Modules and test scripts that depend only upon level 0 modules are
dependency level 1.
3. Modules and test scripts that depend only upon levels 0 through N are
level N+1.
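For illustration, the leveling scheme might look like this, with two hypothetical packages collapsed into one snippet (in a real distribution each would live in its own *.pm file):

```perl
use strict;
use warnings;

package Foo::Constants;              # level 0: depends upon nothing else
use constant MAX_RETRIES => 3;       # within the distribution

package Foo::Util;                   # level 1: depends only upon level 0
# In separate files this line would read: use Foo::Constants;
sub retries_left { Foo::Constants::MAX_RETRIES() - $_[0] }

package main;
print Foo::Util::retries_left(1), "\n";   # prints 2
```

The point of the discipline is that the "use" arrows only ever point downward, from level N+1 to levels 0..N, so a cycle cannot form.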
I have experienced mixed results over the years in applying the above:
1. Documentation files and modules composed solely of constants are
readily placed at level 0.
2. Modules with subroutines are the meat of the problem.
3. Test scripts can be even harder, especially when testing lower-level
modules.
4. The hierarchy tends to become tall and narrow as modules are added.
5. The greater the scope and/or complexity of the distribution, the
harder it becomes to practice the religion.
6. Cut-and-paste can solve simpler cases; but cut-and-paste creates its
own problem: duplicated code that must be kept in sync.
7. Throwing a set of interdependent subroutines into one giant low-level
module can solve the harder cases; but large source files create their
own problems, and what about package vs. module names?
8. Going to the other extreme -- one module file per subroutine --
seems unwieldy, but I haven't fully explored this idea.
Certain kinds of modules are paradoxes for me. For example, modules for
testing. How do I test the testing module using the testing module?
(Answer: I gave up and used other means to test the testing module.)
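One escape from the paradox (a sketch of one possible "other means," not necessarily what was actually done) is a bootstrap test: exercise the testing module's assertions but report the results through raw TAP, so the test depends on nothing but the Perl interpreter. The `is_close` helper here is hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical assertion that would normally live in the testing module.
sub is_close { abs( $_[0] - $_[1] ) < 1e-9 }

# Report through raw TAP so the check does not rely on the module under test.
print "1..2\n";
print( ( is_close( 1.0, 1.0 ) ? 'ok' : 'not ok' ), " 1 - equal values\n" );
print( ( is_close( 1.0, 2.0 ) ? 'not ok' : 'ok' ), " 2 - unequal values\n" );
```

Once the assertions pass under the bootstrap, higher-level tests can safely use the testing module itself.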
I have toyed with Filter::cpp a little and recall seeing some really
awesome C macro code back in the day, but that degrades portability.
I haven't had much luck RTFM or STFW on this narrow subject.
How do other module authors solve the intra-distribution module
dependency problem?
p.s. I realized that I have been thinking of dependencies in terms of
packages and subroutines:
- Foo::foo() calls Bar::bar().
- Therefore, Foo depends upon Bar.
But for the simple case, the tools look for dependencies based upon code
in the module files:
- Foo.pm contains the code "use Bar qw( bar );"
- Therefore, Foo.pm depends upon Bar.pm.
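The file-level view can be sketched as a tool that scans source text for "use" statements (the package names are hypothetical, and real scanners handle many more cases):

```perl
use strict;
use warnings;

my $source = <<'EOM';
package Foo;
use strict;
use Bar qw( bar );
sub foo { bar() }
1;
EOM

# Capture "use Module" lines; requiring a leading capital letter skips
# pragmas such as strict and warnings.
my @deps = $source =~ /^use\s+([A-Z][\w:]*)/mg;
print "Foo.pm depends upon: @deps\n";     # prints: Foo.pm depends upon: Bar
```

Nothing in this scan knows which subroutine calls which; it sees only which file pulls in which file.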
The two modes of thinking are mostly interchangeable when there is a
one-to-one mapping of packages::subroutines to module files. But, as I
understand it, Perl allows one-to-many, many-to-one, and many-to-many
mappings. (Dynamically generated code adds many-to-zero.) Perhaps this
flexibility combined with the latter realization can facilitate other
solutions.
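The many-to-one mapping, for example, is perfectly legal Perl: one file may declare several packages, at which point the file-level and package-level views of "dependency" no longer coincide. A sketch with hypothetical names:

```perl
use strict;
use warnings;

# Two packages declared in a single file.
package Garden::Tree;
sub new { bless {}, shift }

package Garden::Shrub;
sub new { bless {}, shift }

package main;
printf "%s and %s share one file\n",
    ref( Garden::Tree->new ), ref( Garden::Shrub->new );
```

A file-scanning dependency tool would treat both packages as a single node, even though at the package level they might sit at different dependency levels.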