Mike

Cross-posting to dev since this is partly a dev discussion.
It is ready for use IMO, but we are still at the SNAPSHOT stage. Whether it is ready for your intended use is really another question. The intention of the project was very much to provide a minimum viable product and then let the community iterate on it. Like all aspects of Jena it will ultimately be community driven; if no-one actively contributes or provides feedback then it will not progress. So it may be that it solves your problem, but then again it may not.

Right now we are at the point where I think it is ready for wider exposure and testing (in fact I'm presenting about it at ApacheCon EU next month) and for starting to get more community input.

You can find SNAPSHOT releases via the Apache Maven snapshot repository - see http://jena.apache.org/getting_involved/index.html for how to configure Maven to use this repository. The group ID is the usual org.apache.jena and the artefact IDs are as follows:

- jena-hadoop-rdf-common
- jena-hadoop-rdf-io
- jena-hadoop-rdf-mapreduce

(A sketch pom.xml fragment using these coordinates is included after the quoted message below.)

I have been migrating the code into the main git repository this week and will likely merge it into master soon, so that it will be included in our next release. Currently it is on the hadoop-rdf branch: https://github.com/apache/jena/tree/hadoop-rdf

In terms of productization there are a couple of things that are still incomplete as far as an initial "production" release goes:

1 - Configuring the projects to generate javadocs
2 - Writing up user documentation for the website

Let us know if you have further questions or need help/advice on whether it can be used to solve a specific problem.

Rob

On 22/10/2014 19:36, "Mike Barretta" <[email protected]> wrote:

>I'm looking to incorporate Jena into a MapReduce job and was curious as to
>the status/stability of the jena-hadoop-rdf module. Is it ready for use?
>If not, is there an expected time when it might be? I've looked at the
>Jira project, but there doesn't seem to be an issue directly related to
>"productizing" it.
>
>Thanks,
>Mike
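
For reference, a minimal pom.xml fragment based on the coordinates Rob gives above might look like the following. This is only a sketch: the version number is a placeholder and the snapshot repository URL is an assumption; the getting_involved page linked above remains the authoritative description of how to configure Maven for SNAPSHOT builds.

    <!-- Snapshot repository: URL below is an assumption; see the
         getting_involved page for the authoritative settings -->
    <repositories>
      <repository>
        <id>apache-snapshots</id>
        <url>https://repository.apache.org/content/repositories/snapshots</url>
        <snapshots>
          <enabled>true</enabled>
        </snapshots>
      </repository>
    </repositories>

    <dependencies>
      <!-- Group and artefact IDs as given in the email above; the version
           is a placeholder - use whatever SNAPSHOT is currently published -->
      <dependency>
        <groupId>org.apache.jena</groupId>
        <artifactId>jena-hadoop-rdf-common</artifactId>
        <version>0.9.0-SNAPSHOT</version>
      </dependency>
      <dependency>
        <groupId>org.apache.jena</groupId>
        <artifactId>jena-hadoop-rdf-io</artifactId>
        <version>0.9.0-SNAPSHOT</version>
      </dependency>
      <dependency>
        <groupId>org.apache.jena</groupId>
        <artifactId>jena-hadoop-rdf-mapreduce</artifactId>
        <version>0.9.0-SNAPSHOT</version>
      </dependency>
    </dependencies>

With those dependencies on the classpath, the I/O and MapReduce building blocks in jena-hadoop-rdf-io and jena-hadoop-rdf-mapreduce can be wired into an ordinary Hadoop job, which is the use case Mike describes.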
