Hi,
I have an urgent problem with Apache Jena; I have posted my question on
StackOverflow
(http://stackoverflow.com/questions/39926809/how-to-set-data-type-for-owl-restriction-using-apache-jena?sem=2).
If someone knows the answer, please answer it here or on StackOverflow.
The copied question:
Hello, if we have a model1 (not an inference model) and we make changes to
the ontology through that model.
We also have an inference model (InfModel), and the ontology changed after
executing some rules.
Then, if we have to write the changes to disk, should we write both
models to disk or
I guess Sieve is now part of LDIF (http://ldif.wbsg.de/), and then there is
also Silk (http://silkframework.org/), which has a nice UI for creating
transformation scripts.
- Original Message -
From: "mikael pesonen"
To: "Miika Alonen" ,
Hi,
I never would have thought of this; quite a nice way. Those three
technologies are new to me, and SPIN especially seems interesting, also
in general for generating SPARQL.
So a lot to learn...
Thanks!
Mikael
On 7.10.2016 12:39, Miika Alonen wrote:
Not a question but a red-faced admission.
String serviceURI = "http://wheremydatawas/ds/data";
DatasetAccessor accessor = DatasetAccessorFactory.createHTTP(serviceURI);
System.out.println("Has http://example/update-base/EQ --> " +
Hi,
You could load the "schemas" into separate graphs (here ?model) and change
the datatypes using SPARQL.
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

DELETE { GRAPH ?data { ?s ?p ?o } }
INSERT { GRAPH ?data { ?s ?p ?o2 } }
WHERE
{
  # Bind ?range from the schema graph first, so it is in scope
  # for the BIND below.
  GRAPH ?model {
    ?p a owl:DatatypeProperty ;
       rdfs:range ?range .
  }
  GRAPH ?data {
    ?s ?p ?o .
    FILTER(isLiteral(?o) && lang(?o) = "")
    BIND(STRDT(STR(?o), ?range) AS ?o2)
  }
}
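As a quick illustration of what such an update does (the graph contents, the ex: prefix, and the property are invented for the example):

```turtle
@prefix ex:   <http://example.org/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# In the schema graph (?model):
ex:published a owl:DatatypeProperty ;
             rdfs:range xsd:date .

# In the data graph (?data), before the update:
ex:doc1 ex:published "2016-10-07" .

# In the data graph, after the update (the plain literal is retyped):
ex:doc1 ex:published "2016-10-07"^^xsd:date .
```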
Hi,
I'm using PHP at the moment to store data in Jena. Data comes in in
various formats, and my code generates SPARQL updates from it.
Does anyone know if there is a library or code that could format the
values based on a schema?
For example, I would load the Dublin Core schema and the library would
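The kind of schema lookup this would need can be sketched in SPARQL (a sketch only: it assumes the loaded schema declares rdfs:range triples for its properties, which not every vocabulary does):

```sparql
PREFIX rdfs:    <http://www.w3.org/2000/01/rdf-schema#>
PREFIX dcterms: <http://purl.org/dc/terms/>

# Look up the declared range of a Dublin Core property; a formatting
# library could use the result to pick the literal datatype to apply.
SELECT ?range
WHERE {
  dcterms:date rdfs:range ?range .
}
```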
Granatum (http://www.fit.fraunhofer.de/en/fb/cscw/projects/granatum.html) is
a cancer research project that uses Jena to query across multiple databases
with differing vocabularies. A custom query engine built on Jena performs
automatic query translation and partitioning for each endpoint as necessary.