Noel wrote:

> Adam, and how is said tool going to start in the first place?  Without
> meta-data, there is a limit to what the tool can do.  Basically, it would
> have to operate relative to the URL provided to it.

My input here is primarily based on writing Ruper, a tool that attempts
exactly what I said. It is given links to repositories (local or remote); it
reads those repositories and allows queries against them based on attributes
of the resources. It does this by parsing the URLs.
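To make the idea concrete, here is a rough sketch (mine, not Ruper's actual
code) of mining resource attributes from a repository URL alone. It assumes
the Maven 1 repository layout, `<root>/<group>/<type>s/<artifact>-<version>.<ext>`;
any URL outside that convention simply yields nothing:

```python
import re

def parse_resource(url):
    """Return the attributes implied by a repository URL, or None if the
    URL does not follow the assumed Maven 1 layout."""
    parts = url.rstrip("/").split("/")
    if len(parts) < 4:
        return None
    filename, type_dir, group = parts[-1], parts[-2], parts[-3]
    stem, _, ext = filename.rpartition(".")
    # Take the version to start at the first '-' followed by a digit.
    m = re.match(r"(?P<artifact>.+?)-(?P<version>\d.*)", stem)
    if not m or not type_dir.endswith("s"):
        return None
    return {
        "group": group,
        "type": type_dir[:-1],          # "jars" -> "jar"
        "artifact": m.group("artifact"),
        "version": m.group("version"),
        "ext": ext,
    }
```

So a URL such as `http://www.ibiblio.org/maven/xerces/jars/xerces-2.4.0.jar`
yields group "xerces", artifact "xerces", version "2.4.0" with no metadata
file in sight.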

You don't have to like the tool, I'm not trying to push the implementation,
I'm just giving you experiences from that tool. It allows you to query what
is there, query and capture "oldest resources" [and do a delete/clean], and
download newest, etc.
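The "download newest" style of query can be sketched along these lines (again
mine, not Ruper's code; the SNAPSHOT naming convention and purely numeric
dotted versions are assumptions, and anything fancier would need real
version-comparison rules):

```python
def newest_release(versions):
    """Pick the newest version string, skipping snapshot/nightly builds.
    Assumes purely numeric dotted versions such as '2.4.0'."""
    releases = [v for v in versions if "SNAPSHOT" not in v.upper()]
    if not releases:
        return None
    # Compare numerically component by component, so '2.10.0' > '2.4.0'.
    return max(releases, key=lambda v: [int(p) for p in v.split(".")])
```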

Some find such a tool useful, and I'd like to believe that Apache users
(admins and external users) would find it useful too. I don't care whose
implementation gets used; I feel that these capabilities are so powerful that
they ought to constitute a minimum bar for Apache.

Sure, it isn't going to be a 100% generic tool for all cases, but Apache is
doing this for Apache. Let the tools lead and the users (our own committers)
can choose to follow. Once upon a time along came the browser, and sooner or
later folks were converting their documents to HTML because the benefits
outweighed the resistance to change. I'm saying that we can't enforce things,
but if we make the benefits sufficiently worthwhile and the transition easy,
then most folks will follow.

Again, this tool works today on over 95% of the contents of the Maven
repository without any spec. We could achieve this. A nice simple IDE plugin
could update a project and download files with or without user intervention,
for example.

> Tim's URI schema supports your operations when combined with a layer,
> which can be implied or meta-data based.

Aren't you saying that metadata can allow a remote tool to introspect? Yes,
I agree, but this has nothing to do with an unparsable URI scheme. The URI
scheme is generally fine, but if we aren't addressing metadata (almost
impossible), why set back tools that mine metadata from URIs? It works today;
why would we force a step backwards?

[I sometimes feel the academics of the URI Scheme Specification are
outweighing the practicalities of an implementation. I believe in writing a
specification first, but specifications get revised based upon real-world
experience. Tools are that experience.]

> > For me, the strongest argument for tooling (other than purely saving
> > admins effort) is download + verify (MD5/whatever).
> That does not require the kind of semantics your earlier operations do.
> The verification content can be relative to the URI provided to the tool.

True, my bad, I got carried away with my argument, the tool I am familiar
with, and my own dislike of stale software links. You could have the user
tell the client tool the resource/URI and have it do the
download/verification, yes. That said, I don't think it buys the user enough;
they have to browse/locate the URI and stash it in some local config. I'd
like to say "go get me xerces from any repository it is in, but get me the
latest, and I only want release builds, not nightly/snapshot". That, to me,
is useful.
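Even the plain download + verify step is simple enough to sketch. This
assumes the common convention that the checksum is published beside the
artifact as `<url>.md5` containing a hex digest; the helper names are mine,
not from any existing tool:

```python
import hashlib
import urllib.request

def md5_matches(data, expected_hex):
    """Compare downloaded bytes against a published hex digest."""
    return hashlib.md5(data).hexdigest() == expected_hex.strip().lower()

def download_and_verify(url):
    """Fetch <url> and its sibling <url>.md5, and check that they agree."""
    data = urllib.request.urlopen(url).read()
    # Some .md5 files carry "digest  filename"; keep only the digest.
    expected = urllib.request.urlopen(url + ".md5").read().decode().split()[0]
    if not md5_matches(data, expected):
        raise ValueError("MD5 mismatch for %s" % url)
    return data
```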

I don't mind being alone in my views, but I ask again: if we don't set the
bar higher than a one-way URI for download, why write a spec at all? I feel
we have the potential to win big, and I'd like the ASF Repository to be a
step toward these goals, not a step backward.


