good evening;

> Begin forwarded message:
> 
> From: "Laura Morales" <[email protected]>
> Subject: Does anybody here understand "Linked Data Fragments"?
> Date: 2017-05-06 at 15:31:17 GMT+2
> To: jena-users-ml <[email protected]>
> Reply-To: [email protected]
> 
> As far as I understand them, clients are supposed to submit only simple 
> queries to servers in order to retrieve subsets of the data. Queries like 
> "?subject <ns:property> ?object", and then download this snapshot (subset) of 
the data; in this case the list of triples for the property <ns:property>. The 
> data is downloaded locally, and then the client can issue SPARQL queries on 
> the local copy of the data just downloaded.

in ruben verborgh & co's approach - at the location you noted - the immediate 
“client” is itself a sparql processor (in the gent group's case a javascript 
process), and the application uses that processor as an in-process sparql engine.
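
to make the layering concrete, here is a rough typescript sketch - the 
SparqlEngine interface is hypothetical, a stand-in for whatever client-side 
engine (such as the gent group's javascript client) the application embeds. 
the point is only that the application sees an ordinary in-process query 
call, while the engine behind it issues nothing but simple triple-pattern 
requests against the ldf server.

// hypothetical interface - a stand-in for the embedded client-side sparql
// engine; no concrete library api is implied here
interface SparqlEngine {
  // takes a full sparql query and returns solution bindings; internally it
  // only issues simple triple-pattern requests against a fragments server
  query(sparql: string): Promise<Array<Record<string, string>>>;
}

// the application's view: it neither knows nor cares that the data source
// is an ldf server - the sparql processing happens in-process
async function namesOfThings(engine: SparqlEngine): Promise<string[]> {
  const rows = await engine.query(`
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name WHERE { ?s foaf:name ?name } LIMIT 10
  `);
  return rows.map(row => row["name"]);
}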

> Is this correct? Is this how "Linked Data Fragments" work?

the sparql aspect is layered on top of the ldf aspect.
that is one way they are used, but they are also useful on their own - for 
example, to back an object model.
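
retrieving one fragment needs nothing more than an http get. a minimal 
typescript sketch, assuming the server exposes the usual 
?subject/?predicate/?object url template (a careful client would read the 
template from the fragment's hypermedia controls rather than hard-code it) 
and assuming an environment with fetch (node 18+ or a browser):

// fetch one page of the fragment that matches a triple pattern
async function fetchFragmentPage(
  fragmentBase: string,   // base url of the dataset's fragments interface
  pattern: { subject?: string; predicate?: string; object?: string }
): Promise<string> {
  const url = new URL(fragmentBase);
  if (pattern.subject)   url.searchParams.set("subject", pattern.subject);
  if (pattern.predicate) url.searchParams.set("predicate", pattern.predicate);
  if (pattern.object)    url.searchParams.set("object", pattern.object);
  const response = await fetch(url.toString(), {
    headers: { accept: "text/turtle" },   // fragments are plain rdf documents
  });
  if (!response.ok) throw new Error(`fragment request failed: ${response.status}`);
  return response.text();   // data triples plus count and paging metadata
}

// usage sketch: everything the dataset states about one subject - an object
// model could load its properties lazily from such a call (the dbpedia
// fragments url is only an illustrative example)
async function example(): Promise<void> {
  const page = await fetchFragmentPage(
    "http://fragments.dbpedia.org/2015/en",
    { subject: "http://dbpedia.org/resource/Berlin" }
  );
  console.log(page);
}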

> Doesn't this generate a lot of traffic, a lot of data moving around, and very 
> little improvement over using a local SPARQL endpoint?

it depends on one's goals.
last i read their papers, the numbers spoke for themselves, but, in their 
view, the autonomy benefits outweighed the traffic and the performance costs.

> 
> I really can't understand the benefits. Haaaalp!

years ago, they recorded statistics which “demonstrated” the instability of 
sparql endpoints, suggested the conclusion that a sparql endpoint was too 
difficult, too resource intensive, too failure-prone, too (fill in the blank) 
to be relied on as a component in a robust semantic data processing stack, 
and ventured off down the path of implementing their own sparql processor, 
with linked data fragments as a remote protocol to accomplish the 
simple-entailment statement pattern matching which any sparql processor requires.
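
a crude typescript sketch of that pattern matching - it ignores paging, the 
count metadata which their client uses to order the patterns, and real rdf 
parsing (parseTriples is a hypothetical stand-in for a turtle parser, and 
fetchFragmentPage is the helper sketched above):

interface Triple { subject: string; predicate: string; object: string }
type Bindings = Record<string, string>;

declare function parseTriples(turtle: string): Triple[];   // assumed helper
declare function fetchFragmentPage(
  base: string,
  pattern: { subject?: string; predicate?: string; object?: string }
): Promise<string>;

const isVariable = (term: string) => term.startsWith("?");

// substitute already-bound variables into a pattern before requesting it
function instantiate(pattern: Triple, bindings: Bindings): Triple {
  const term = (t: string) => (isVariable(t) && bindings[t] ? bindings[t] : t);
  return {
    subject: term(pattern.subject),
    predicate: term(pattern.predicate),
    object: term(pattern.object),
  };
}

// evaluate a basic graph pattern by fetching one fragment per pattern per
// partial solution and extending the bindings with each matching triple
async function evaluateBgp(base: string, patterns: Triple[]): Promise<Bindings[]> {
  let solutions: Bindings[] = [{}];
  for (const pattern of patterns) {
    const next: Bindings[] = [];
    for (const bindings of solutions) {
      const p = instantiate(pattern, bindings);
      const page = await fetchFragmentPage(base, {
        subject: isVariable(p.subject) ? undefined : p.subject,
        predicate: isVariable(p.predicate) ? undefined : p.predicate,
        object: isVariable(p.object) ? undefined : p.object,
      });
      for (const triple of parseTriples(page)) {
        const extended: Bindings = { ...bindings };
        let compatible = true;
        for (const pos of ["subject", "predicate", "object"] as const) {
          if (isVariable(p[pos])) {
            if (extended[p[pos]] && extended[p[pos]] !== triple[pos]) {
              compatible = false;
              break;
            }
            extended[p[pos]] = triple[pos];
          }
        }
        if (compatible) next.push(extended);
      }
    }
    solutions = next;
  }
  return solutions;
}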

i enquired once why the intention to have a reliable endpoint was not a 
dimension in the statistics which they collected, and whether that would have 
changed the conclusion, but i have never gotten a thorough answer.

they have, instead, established an ldf source for datasets (the lod 
laundromat) as their demonstration that such a source should be easier to field.
statistics on a wider range of such ldf servers - including servers fielded 
by the class of parties who ran the unreliable sparql endpoints - would be 
interesting.

> 
> For reference: http://linkeddatafragments.org

best regards, from berlin,



---
james anderson | [email protected] | http://dydra.com




