A couple more points I thought of:

There is another implementation that uses JSR 170, so it would be very hard to have any level of interoperability without an API. I certainly don't want that API to be RDF.

There are plenty of good tools that can be leveraged. For example, I use Lucene because it makes incremental updates easy: it can treat many disparate files as a single logical index. I don't want to be forced, and I don't want anyone else to be forced, to use RDF directly.
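To make that property concrete, here is a toy sketch of the idea, not Lucene's actual API (all names here are made up): each source gets its own small index, an update only touches that one segment, and a search spans all of them as if they were one index.

```java
import java.util.*;

// Hypothetical sketch: many per-source indices behaving as one logical index.
// This is the shape of the idea only; it is not Lucene's API.
public class CompositeIndex {
    // one tiny inverted index per segment: term -> artifact coordinates
    private final Map<String, Map<String, Set<String>>> segments = new HashMap<>();

    // Incremental update: replace just one segment, leave the rest untouched.
    public void updateSegment(String name, Map<String, Set<String>> index) {
        segments.put(name, index);
    }

    // Query every segment as if it were a single index.
    public Set<String> search(String term) {
        Set<String> hits = new TreeSet<>();
        for (Map<String, Set<String>> seg : segments.values()) {
            hits.addAll(seg.getOrDefault(term, Collections.emptySet()));
        }
        return hits;
    }

    public static void main(String[] args) {
        CompositeIndex idx = new CompositeIndex();
        idx.updateSegment("repo-a", Map.of("junit", Set.of("junit:junit:3.8.1")));
        idx.updateSegment("repo-b", Map.of("junit", Set.of("junit:junit:4.0")));
        System.out.println(idx.search("junit"));
        // [junit:junit:3.8.1, junit:junit:4.0]

        // Only repo-b is re-indexed when it changes:
        idx.updateSegment("repo-b", Map.of("junit", Set.of("junit:junit:4.1")));
        System.out.println(idx.search("junit"));
        // [junit:junit:3.8.1, junit:junit:4.1]
    }
}
```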

For any sort of distribution things like Terracotta can be leveraged, or you might use a tool like Hadoop.

The only unifier for all these tools that have cropped up to date is a good API.

I'll wait for your answer, but I really hope you didn't bind NMaven to RDF.

On 1 Aug 07, at 1:17 PM, Jason van Zyl wrote:


On 1 Aug 07, at 12:25 AM, Shane Isbell wrote:

I would like to see if there is any general interest from the Maven
community in using RDF for storing and retrieving repository information.

If it's the only means, and not accessed via some API shielding the underlying store, then my vote will always be -1. I hope that's not what you've done with NMaven, as that would be a fatal flaw. I'm assuming there is some API sitting on top of it.

I switched NMaven's resolver implementation to one using RDF and am happy with the results. This implementation allows: 1) easily extending meta-data,

Which I'm always skeptical of, as we potentially wind up with schisms, and I'm not sure what kind of extension you need for the purely dependency information the resolver is concerned with.

in my case for attaching requirements to an artifact; 2) writing queries against the repo, as opposed to reading and manipulating the hierarchy of
poms. This also results in cleaner, simpler code;

This should be done with an API, not against a fixed datastore. Using RDF and only RDF is not something I would ever support, because I know of two repository manager implementations that use their own internal format: Proximity uses one, and I use Lucene indices. So the unifier will be an API.
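As a rough sketch of the kind of unifying API being argued for here (every name below is hypothetical, not a real Maven or NMaven class): callers program against one interface, and whether RDF triples, Lucene indices, or Proximity's internal format sits behind it is an implementation detail.

```java
import java.util.*;

// Hypothetical interface shielding the underlying store.
interface RepositoryMetadataSource {
    List<String> dependenciesOf(String artifactCoordinates);
}

// One possible backing: a plain in-memory map standing in for whatever the
// real store is (RDF triples, a Lucene index, Proximity's format, ...).
class InMemorySource implements RepositoryMetadataSource {
    private final Map<String, List<String>> deps;

    InMemorySource(Map<String, List<String>> deps) {
        this.deps = deps;
    }

    public List<String> dependenciesOf(String artifactCoordinates) {
        return deps.getOrDefault(artifactCoordinates, List.of());
    }
}

public class ApiUnifier {
    public static void main(String[] args) {
        RepositoryMetadataSource source = new InMemorySource(
            Map.of("org.apache.maven:maven-core", List.of("junit:junit")));
        // Callers never see the storage format, only the API.
        System.out.println(source.dependenciesOf("org.apache.maven:maven-core"));
        // [junit:junit]
    }
}
```

Swapping the store then means writing a new implementation of the interface, with no change to resolver code.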

3) exporting all the
meta-data to a single RDF/XML file, which has helped me with debugging and understanding the entire repository. A future benefit would be the ability
to run distributed queries across multiple repos.
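As a toy illustration of what "writing queries against the repo" instead of walking pom hierarchies might look like with triple-shaped metadata (the vocabulary and names below are entirely made up, not NMaven's actual schema):

```java
import java.util.*;

// Toy RDF-style triple store: a requirement query is one pattern match,
// not a traversal of nested pom files. Vocabulary here is invented.
public class TripleQuery {
    record Triple(String s, String p, String o) {}

    static final List<Triple> repo = List.of(
        new Triple("nmaven:artifactA", "maven:dependsOn", "nmaven:artifactB"),
        new Triple("nmaven:artifactA", "maven:requires", "dotnet:2.0"),
        new Triple("nmaven:artifactB", "maven:requires", "dotnet:1.1"));

    // Match a (subject, predicate, ?object) pattern -- the simplest
    // SPARQL-like lookup.
    static List<String> objects(String subject, String predicate) {
        List<String> out = new ArrayList<>();
        for (Triple t : repo) {
            if (t.s().equals(subject) && t.p().equals(predicate)) {
                out.add(t.o());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // "What runtime does artifactA require?" -- one query, no pom walking.
        System.out.println(objects("nmaven:artifactA", "maven:requires"));
        // [dotnet:2.0]
    }
}
```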

It sounds like you've built this on RDF, which I don't think is wise at all. This, for example, is not hard to do with any underlying store given the right API, and I don't think it would be hard for you to use an API either. I'll never support a single storage format that isn't accessed via an API.


One of the implications is that while the pom would still be used for building, the local and remote repositories would just contain RDF/XML files; this would, of course, be a big change. I would just like to run this idea by the general group. If there is enough interest, I would be happy to do some prototype work to see how RDF would work within Maven core. You can look at
one of NMaven's core classes that used RDF here:
https://svn.apache.org/repos/asf/incubator/nmaven/trunk/components/dotnet-dao/project/src/main/java/org/apache/maven/dotnet/dao/impl/


As a backing store for a rework of the artifact API, sure; as the primary means, I'd never support that myself.

Regards,
Shane

Thanks,

Jason

----------------------------------------------------------
Jason van Zyl
Founder and PMC Chair, Apache Maven
jason at sonatype dot com
----------------------------------------------------------




---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]


Thanks,

Jason





