Sander Temme wrote:
One aspect of TCAHMAN that I hadn't covered in my original discussion
is how to add modules from the repository to an existing Apache
install. This would require a program, installed with the server, that
can fetch the module code and run the build/install. We have
tentatively named this program apxs++ since it's a logical extension of
what apxs does today. For maximum compatibility, this tool would have
to be written in C. Currently, apxs is a Perl program, but you can't
always count on the availability of Perl on the system, especially on
Windows. The apxs++ tool would be available only when mod_so is available.
I think we did discuss this a little, though my memory is faded too.
I don't see the problem with apxs being perl. Sure, it is not quite
universally available, but then neither is a C compiler, which is
another prerequisite for anything APXS-like. That implies at least
the option of the tool downloading a binary (and its dependencies
where applicable), and it no longer really looks like apxs.
So why not preserve apxs as-is - perhaps enhanced to use the new
archive for those who do have the prerequisites - and make a clean
new start with the new tool?
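For those who do have Perl and a compiler, the archive-aware apxs needn't
be much more than a fetch step bolted onto what apxs already does. A rough
sketch, with a hypothetical archive URL and layout (only the apxs
-c/-i/-a flags and LWP::Simple are real):

  #!/usr/bin/perl
  # Rough sketch: fetch a module from the (not yet existing) archive
  # and hand it to the stock apxs to build and install.
  use strict;
  use warnings;
  use LWP::Simple qw(getstore is_success);

  my $module  = shift or die "usage: $0 mod_example\n";
  my $archive = "http://modules.apache.org/dist";    # hypothetical URL
  my $tarball = "$module.tar.gz";

  my $rc = getstore("$archive/$tarball", $tarball);
  die "fetch failed ($rc)\n" unless is_success($rc);
  system("tar", "xzf", $tarball) == 0 or die "unpack failed\n";

  # assume the tarball unpacks into a directory named after the module
  chdir $module or die "chdir $module: $!\n";

  # -c compile, -i install, -a add a LoadModule line (real apxs flags)
  system("apxs", "-c", "-i", "-a", "$module.c") == 0
      or die "apxs build failed\n";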
If you're linking modules statically, you're compiling your own httpd
and should be able to fetch the source code for the desired module(s)
before you start compiling.
I don't think we should worry too much about supporting static compiles.
Rather, concentrate on making sure everyone has a dynamic build unless
they consciously override that, for their own reasons.
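The building blocks for that are already in 2.0's configure; it's mostly
a question of making something like this the path of least resistance:

  # existing httpd 2.0 configure options: build most modules as DSOs
  ./configure --enable-so --enable-mods-shared=most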
All this goodness, if and when it happens, would be run from a newly
created httpd-modules subproject. We discussed the proposed nature and
structure of this subproject (which itself has not been proposed yet)
and the general idea seemed to be that we create a flat sandbox where
module developers can commit to everything. Every httpd committer
automatically gets httpd-modules, and the subproject could be a
breeding ground for new httpd committers. If and when a module develops
its own community, it can get its own subproject (example: mod_python)
or even go top level (example: mod_perl, mod_tcl). The httpd-modules
subproject would also own the repository code.
This blurs the distinction between bundled and addon modules. That may
be no bad thing, but needs to be thought through. Do we, for example,
officially accept bug reports for third-party modules in our bugzilla?
'Cos if not, that's a potentially nasty limbo state for the end-users.
The TCAHMAN system would be targeted at:
a) builders (who build their own Apache)
b) enhancers (what did I mean by this? Perhaps folks who want
to hang additional modules into an existing Apache?)
c) packagers (TCAHMAN could register installed modules with the
various package registries out there, giving
httpd packagers a powerful way to manage the
installed core and modules)
d) testers (perl-framework) (Not sure what I meant by this)
e) Any server admin
We would initially populate the repository with modules that were
formerly in the core, and eventually open it up to third-party module
developers. Having easy access to modules through TCAHMAN will allow us
(httpd) to lighten the distribution.
Are you thinking of even the very-core modules? I think that could make
sense, provided the default install always includes them unless the
builder makes a conscious decision and overrides a warning about "some
core functions may not work".
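Once they're DSOs, the formerly-core modules would just be ordinary
LoadModule lines in the stock httpd.conf, something like (module names
purely illustrative):

  # standard httpd.conf syntax; the modules named are only examples
  LoadModule rewrite_module  modules/mod_rewrite.so
  LoadModule speling_module  modules/mod_speling.so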
Once we open the repository up to third-party developers, we may have
to do a 'click through' (or key through) acknowledgement that we (ASF)
are not responsible for code that is not ours. IANAL, so I don't know
what is required/comfy.
Indeed.
The TCAHMAN repository would utilize our existing mirror
infrastructure, and would be a great service to offer third-party
developers.
We discussed CPAN, from which a lot of people blindly and trustingly
download module upon module, as root. How did this get so trusted? Who
is responsible for the code? We hear that nobody owns CPAN, and there
is no identifiable target for any legal action anyone might want to
bring. This obviously wouldn't fly for the ASF.
The designated front for TCAHMAN would be modules.apache.org, which is
currently run by Covalent. We would run TCAHMAN on our own
infrastructure, so we'd need to get the vhost back from them. While
this is technically really easy (we own the DNS for apache.org, after
all), it would be a good thing to arrange a smooth transition.
Every module uploaded to the network would come with metadata,
including (but not limited to):
* License
* Versioning (compatible with 1.3, 2.0, 2.1, ...; not before, not
after; perhaps keyed to the MMN, the Module Magic Number?)
* Documentation URL
* Author info
* Build options
* Dependencies (e.g. external libs)
* Exports (e.g. modules that provide or consume an API)
* Restrictions (e.g. a non-threadsafe module will specify prefork)
* Status (ASF supported? Third-party supported? Unsupported?
ASF Endorsed?)
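To make the list above concrete, a descriptor for a single module might
look something like this (field names and format are a strawman, nothing
is decided):

  # strawman only -- field names and format are not decided
  Module:        mod_example
  Version:       1.2
  License:       Apache License 2.0
  Compatible:    2.0, 2.1 (not 1.3)
  Docs:          http://modules.apache.org/mod_example/
  Author:        A. N. Other <author@example.org>
  Build:         apxs -c -i mod_example.c
  Depends:       libxml2
  Provides:      example_api
  Restrictions:  prefork only (not threadsafe)
  Status:        third-party supported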
Tasks:
* Write apxs++
* Define module metadata
The metadata comes first!
* Write the backend
* Take back modules.apache.org
I think the core of this is to get the metadata right. This leads to
a corollary: can/should we adopt an RDF format for the metadata?
One that'll be supported by the existing desktop and other readers
widely used for newsfeeds? That would provide the ideal hook on which
client implementors could hang a range of alternatives (source/binary,
perl/noperl, gui/auto, ...) to your apxs++.
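A very rough sketch of what one entry might look like, riding RSS 1.0
(which is RDF) plus Dublin Core, with the httpd-specific fields in a
namespace of our own (the mod: namespace below is invented, and the
channel element is omitted for brevity):

  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns="http://purl.org/rss/1.0/"
           xmlns:dc="http://purl.org/dc/elements/1.1/"
           xmlns:mod="http://modules.apache.org/ns/metadata#">
    <item rdf:about="http://modules.apache.org/mod_example/">
      <title>mod_example 1.2</title>
      <link>http://modules.apache.org/mod_example/</link>
      <dc:creator>A. N. Other</dc:creator>
      <dc:rights>Apache License 2.0</dc:rights>
      <mod:compatible>2.0</mod:compatible>
      <mod:compatible>2.1</mod:compatible>
      <mod:mpm>prefork</mod:mpm>
      <mod:status>third-party supported</mod:status>
    </item>
  </rdf:RDF>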
--
Nick Kew