Digression:
Is this the right time to split the Model-API (APIs?) from the core
graph-level machinery into a separate module?
(I don't understand this question.) Are you saying you would like to see
all the interfaces and helper classes in one module and the memory
implementation in another? Do we want to do this? If not, what do you mean?
Claude
I was wondering about modularizing along the lines of a (new) core module
containing the graph-level machinery, including the GraphMem implementation.
More modules, with a pure interface module, are also possible, but you
can't test much without an implementation, and complete mocking is a lot
of work; so why not use one memory implementation as the functional
reference implementation?
The split then might be (and I haven't tried):
c.h.h.j.graph
c.h.h.j.mem
c.h.h.j.datatypes
and
c.h.h.j.rdf
c.h.h.j.ontology
c.h.h.j.enhanced
(and maybe ARP+xmloutput in their own module)
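A multi-module build along those lines could be sketched as follows. The module names here are purely illustrative (derived from the package groupings above), and the Maven layout is an assumption, not a tried split:

```xml
<!-- Hypothetical parent POM fragment; module names are illustrative only. -->
<modules>
  <module>jena-core</module>     <!-- graph, mem, datatypes -->
  <module>jena-rdf-api</module>  <!-- rdf, ontology, enhanced -->
  <module>jena-arp</module>      <!-- possibly ARP + xmloutput on their own -->
</modules>
```

The dependency direction would be jena-rdf-api depending on jena-core, with GraphMem in the core serving as the functional reference implementation for tests.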
I'm sure there is entanglement, and I'm guessing it's not trivial in
places. I know there is some around AnonIds, which I think should be kept
in the RDF API (for compatibility) but de-emphasised/deprecated in the
Graph SPI.
The RDF API does not seem to be an extension point. The API/SPI design
allows multiple APIs in different styles: I'd love to see an idiomatic
Scala API over graph/triple/node. Or Clojure. Or a new Java one (for
example, targeting Java 8).
So, if that is desirable, how do we make it clean, clear, and easy to do?
One step is being clear-cut about the scope of the current RDF API.
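As a minimal sketch of what an alternative API layered over graph/triple/node might look like: the types below are simplified stand-ins for the real Graph SPI (the actual interfaces live in com.hp.hpl.jena.graph), and the fluent builder is entirely hypothetical, just one possible "new style" Java API.

```java
import java.util.LinkedHashSet;
import java.util.Set;

class FluentGraphSketch {
    // Simplified stand-ins for the SPI types (not the real Jena interfaces).
    record Node(String uri) {}
    record Triple(Node s, Node p, Node o) {}

    // Minimal graph: a set of triples, as in a memory implementation.
    static class Graph {
        final Set<Triple> triples = new LinkedHashSet<>();
        void add(Triple t) { triples.add(t); }
        int size() { return triples.size(); }
    }

    // A hypothetical fluent API layered over the graph-level SPI.
    static class GraphBuilder {
        private final Graph g = new Graph();
        GraphBuilder triple(String s, String p, String o) {
            g.add(new Triple(new Node(s), new Node(p), new Node(o)));
            return this;
        }
        Graph build() { return g; }
    }

    public static void main(String[] args) {
        Graph g = new GraphBuilder()
            .triple("http://example/a", "http://example/knows", "http://example/b")
            .triple("http://example/b", "http://example/knows", "http://example/c")
            .build();
        System.out.println(g.size()); // prints 2
    }
}
```

The point is that such an API (Scala, Clojure, or newer Java alike) only needs the graph-level SPI underneath it, not the existing Model API.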
Andy