Hi Fabian,

for the "temporary import", you could use a separate context. This would clearly separate the data from the rest but still allow running SPARQL queries over both. Note, however, that all "normal" queries on the main datastore would also "see" the temporary import.
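To illustrate, here is a minimal sketch of the context approach against the plain Sesame 2.7 API; the context URI and the file name are made up, and inside Marmotta you would obtain the connection from the platform's triple store service instead of creating your own repository:

import java.io.File;

import org.openrdf.model.URI;
import org.openrdf.query.QueryLanguage;
import org.openrdf.query.TupleQuery;
import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.rio.RDFFormat;
import org.openrdf.sail.memory.MemoryStore;

public class TempImportSketch {
    public static void main(String[] args) throws Exception {
        Repository repo = new SailRepository(new MemoryStore());
        repo.initialize();

        // Hypothetical context URI marking the temporary import
        URI tmpContext = repo.getValueFactory()
                .createURI("http://example.org/context/temp-import");

        RepositoryConnection con = repo.getConnection();
        try {
            // Load the uploaded data into its own context (named graph)
            con.add(new File("upload.ttl"), "http://example.org/",
                    RDFFormat.TURTLE, tmpContext);

            // A "normal" query spans all contexts, so it also sees the import...
            TupleQuery all = con.prepareTupleQuery(QueryLanguage.SPARQL,
                    "SELECT * WHERE { ?s ?p ?o }");

            // ...while a GRAPH pattern isolates the import for validation
            TupleQuery tmpOnly = con.prepareTupleQuery(QueryLanguage.SPARQL,
                    "SELECT * WHERE { GRAPH <http://example.org/context/temp-import> { ?s ?p ?o } }");

            // Discarding a rejected import is a single call on the context:
            con.clear(tmpContext);
        } finally {
            con.close();
        }
    }
}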
Another possibility would be to create a new (MemorySail) Repository for the temporary import and then connect it with the rest of the data using a Sesame Federation [1] (a rough sketch follows after the quoted thread below). A drawback here would be that you might lose the database's native SPARQL support, which can improve performance significantly.

Best,
Jakob

[1] http://openrdf.callimachus.net/sesame/2.7/apidocs/org/openrdf/sail/federation/Federation.html

On 12 September 2014 09:36, Fabian Cretton <[email protected]> wrote:
> Thank you Jakob,
>
> As I am slowly starting to code that module, I have two further
> questions:
>
> 1. This import functionality should allow a two-step import:
> - a first "temporary import" to have a look at the data and validate it;
> the data should not be available in the platform yet
> - then a "real import", which corresponds to the standard import in the
> back-end.
>
> The "temporary import" should allow SPARQL queries over the data, and
> also SPARQL queries that rely on both the temporary data AND the data
> already available in Marmotta. Nevertheless, this "temporary import"
> should not be part of the triple store yet.
> If I were using OWLIM/Sesame, I could think about having a temporary
> repository to load the temporary data, and then issue SPARQL SERVICE
> queries to query both sources.
> As Marmotta's back-end is based on Sesame, is that feasible?
> In other words, can Marmotta handle different 'repositories', and if so,
> can you point to some information?
>
> If not, I am currently thinking about other workarounds, like using
> rdfstore-js on the client to load the temporary data.
>
> 2. LDPC and back-end's content
> I read somewhere that the standard triples (not managed by LDP) will
> not be available through LDPC, is that correct?
> If so, and if I want my import functionality to be compatible with LDP,
> what would you recommend?
>
> Also, to better understand LDPC vs. named graphs, could we say that they
> are two different triple 'organisation' levels?
> The named graph makes the triple a quad, while the LDPC is actually
> information described in RDF (adding other triples to qualify the
> existing ones).
>
> Thank you for your time and help
> Fabian
>
>>>> Jakob Frank <[email protected]> 12/09/14 8:05 >>>
> Hi Fabian,
>
> On 10 September 2014 08:57, Fabian Cretton <[email protected]> wrote:
>> My question here is to know whether it would be 'odd' to base the
>> 'enhanced' import functionality on LDClients?
>> It seems to me that the current import doesn't rely on LDClients, but
>> solely on "RDFImporterImpl", hence my question.
> there is an important conceptual difference between the Import and LDClient:
>
> Both add triples (or quadruples) to the RDF store, but:
> * Import is dataset-based, i.e. you can import any combination of
> resources in one file.
> * LDClient is resource-based, i.e. you import data starting from a
> known resource (URI) which has to be resolvable via http(s).
>
> Most use cases of one can be implemented using the other; the overlap
> is quite big: you can convert/enrich/ground the data before adding it
> to the RDF store, create modules for custom formats, etc.
> If you have complex constructs with anonymous nodes (BNodes),
> importing is the easier way.
>
> As I said, the distinction is primarily conceptual, and from what I
> understood of your use-case, the "Import" would be the way to go.
>
> Best,
> Jakob
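PS: here is a rough, untested sketch of the Federation setup mentioned above. The in-memory main repository is just a stand-in for the store that holds your existing data (in Marmotta that would be the platform's own, KiWi-backed repository):

import org.openrdf.repository.Repository;
import org.openrdf.repository.RepositoryConnection;
import org.openrdf.repository.sail.SailRepository;
import org.openrdf.sail.federation.Federation;
import org.openrdf.sail.memory.MemoryStore;

public class FederationSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the repository holding the existing data
        Repository mainRepository = new SailRepository(new MemoryStore());
        mainRepository.initialize();

        // Separate in-memory repository for the staged import
        Repository tempRepository = new SailRepository(new MemoryStore());
        tempRepository.initialize();
        // ... load the uploaded file into tempRepository here ...

        Federation federation = new Federation();
        federation.addMember(mainRepository);
        federation.addMember(tempRepository);
        federation.setReadOnly(true); // query-only view over both members

        Repository federated = new SailRepository(federation);
        federated.initialize();

        RepositoryConnection con = federated.getConnection();
        try {
            // SPARQL against "con" sees the union of both repositories,
            // while queries against mainRepository alone never see the
            // temporary data.
        } finally {
            con.close();
        }
    }
}

The query-only federation keeps the isolation property you asked for: the temporary data never enters the main store unless you explicitly copy it over in the "real import" step.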
