[3.0.1] listSuperClasses() does not traverse?

2017-10-24 Thread Martynas Jusevičius
Hi,

I thought I understood how OntClass.listSuperClasses() works, but maybe I
don't.

I have such a class structure in my ontology (superclass is at the top):

3. https://www.w3.org/ns/ldt/document-hierarchy/domain#Item
2. http://atomgraph.com/ns/platform/domain#Item
1. https://localhost/admin/ns#AgentItem

Yet when I'm debugging, I can see the pair-wise relationships, but not the
chain all the way up from 1 to 3:

1. getOntology().getOntModel().getOntClass("
https://localhost/admin/ns#AgentItem
").listSuperClasses(false).toList().toString()

[https://localhost/admin/ns#ItemOfAgentContainer,
http://atomgraph.com/ns/platform/domain#Item]

2. getOntology().getOntModel().getOntClass("
http://atomgraph.com/ns/platform/domain#Item
").listSuperClasses(false).toList().toString()

[https://www.w3.org/ns/ldt/document-hierarchy/domain#Item]

I can see that within the method hasPropertyValue(
getProfile().SUB_CLASS_OF(), "SUB_CLASS_OF", cls ) returns false.

Why is that so? Is my usage wrong?

Additional info:
getOntology().getOntModel().getSpecification().getProfile() == OWLProfile
getOntology().getOntModel().getSpecification().getReasoner() == null


Martynas


Re: Does order of JOIN clauses affect performance

2017-10-24 Thread Martynas Jusevičius
Maybe this article can help:
http://wwwconference.org/www2008/papers/pdf/p595-stocker1.pdf

It's about BGPs not graph patterns, but I guess selectivity still applies.
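
For intuition about why selectivity matters (a toy cost model, not how ARQ's optimizer actually works, and the graph sizes below are made up): probing the larger pattern only for subjects that survived the smaller one keeps the intermediate result small.

```java
import java.util.*;

public class JoinOrderDemo {

    // Toy hash join: iterate the first pattern's bindings and probe the
    // second pattern by shared subject; returns the number of probes.
    static int joinCost(Map<String, List<String>> first,
                        Map<String, List<String>> second) {
        int probes = 0;
        for (Map.Entry<String, List<String>> e : first.entrySet()) {
            probes += e.getValue().size();          // scan the first pattern
            List<String> matches = second.get(e.getKey());
            if (matches != null) {
                probes += matches.size();           // probe the second pattern
            }
        }
        return probes;
    }

    public static void main(String[] args) {
        // Hypothetical graphs: ng1 is large, ng2 small and selective
        Map<String, List<String>> ng1 = new HashMap<>();
        for (int i = 0; i < 1000; i++) {
            ng1.put("s" + i, Arrays.asList("o" + i));
        }
        Map<String, List<String>> ng2 = new HashMap<>();
        ng2.put("s1", Arrays.asList("x"));
        ng2.put("s2", Arrays.asList("y"));

        System.out.println("large pattern first: " + joinCost(ng1, ng2)); // 1002
        System.out.println("small pattern first: " + joinCost(ng2, ng1)); // 4
    }
}
```

As far as I know, ARQ's optimizer already reorders triple patterns by estimated selectivity, so hand-reordering mostly matters for engines or configurations without such optimization.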

On Wed, 25 Oct 2017 at 03.17, Dimov, Stefan  wrote:

> Let’s consider the following JOIN:
>
> SELECT $subj
> FROM NAMED ng1
> FROM NAMED ng2
> {
> GRAPH ng1  { $subj  $pred0  $obj0 }
> GRAPH ng2  { $subj  $pred1  $obj1 }
> }
>
> My question is: Does the order of the clauses affect the performance?
> Let’s say that ng1 is much bigger than ng2. If SPARQL applies the first
> clause first and then the second clause over the set result of the first
> one, then it makes sense to exchange their places.
>
> Is that right?
>
> Regards,
> Stefan
>


Re: [3.0.1] listSuperClasses() does not traverse?

2017-10-25 Thread Martynas Jusevičius
Thanks Dave.

We are materializing inferences during ontology initialization to avoid
using reasoner subsequently (as it impacts performance).

So in that case I need to traverse the chain myself, correct?

On Wed, Oct 25, 2017 at 9:33 AM, Dave Reynolds 
wrote:

> On 24/10/17 23:51, Martynas Jusevičius wrote:
>
>> Hi,
>>
>> I thought I understood how OntClass.listSuperClasses() works, but maybe I
>> don't.
>>
>> I have such a class structure in my ontology (superclass is at the top):
>>
>> 3. https://www.w3.org/ns/ldt/document-hierarchy/domain#Item
>>  2. http://atomgraph.com/ns/platform/domain#Item
>>  1. https://localhost/admin/ns#AgentItem
>>
>> Yet when I'm debugging, I can see the pair-wise relationships, but not the
>> chain all the way up from 1 to 3:
>>
>> 1. getOntology().getOntModel().getOntClass("
>> https://localhost/admin/ns#AgentItem
>> ").listSuperClasses(false).toList().toString()
>>
>> [https://localhost/admin/ns#ItemOfAgentContainer,
>> http://atomgraph.com/ns/platform/domain#Item]
>>
>> 2. getOntology().getOntModel().getOntClass("
>> http://atomgraph.com/ns/platform/domain#Item
>> ").listSuperClasses(false).toList().toString()
>>
>> [https://www.w3.org/ns/ldt/document-hierarchy/domain#Item]
>>
>> I can see that within the method hasPropertyValue(
>> getProfile().SUB_CLASS_OF(), "SUB_CLASS_OF", cls ) returns false.
>>
>> Why is that so? Is my usage wrong?
>>
>> Additional info:
>> getOntology().getOntModel().getSpecification().getProfile() == OWLProfile
>> getOntology().getOntModel().getSpecification().getReasoner() == null
>>
>>
> If I recall correctly the OntModel API is designed to retrieve whatever is
> stated within the underlying model. The notion was that there was no point
> in having the OntModel API replicate what reasoning does.
>
> So to see the subclass closure you need to have a sufficient reasoner
> configured.
>
>
> Dave
>


Re: [3.0.1] listSuperClasses() does not traverse?

2017-10-25 Thread Martynas Jusevičius
Dave,

I think I get it now. As you mention, we do not include subClassOf closure
during materialization, as the size grows but most of the inferences are
irrelevant.

So we only have direct relationships in the data but are in fact looking
for an inferred one, which is not there.
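
As Dave suggests in the quoted reply, when the closure is not materialized the chain can still be followed on demand with a SPARQL 1.1 property path. A sketch against the class IRI from the original question:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?super
WHERE {
  <https://localhost/admin/ns#AgentItem> rdfs:subClassOf+ ?super
}
```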

On Wed, Oct 25, 2017 at 12:36 PM, Dave Reynolds 
wrote:

> On 25/10/17 11:10, George News wrote:
>
>> On 2017-10-25 11:54, Dave Reynolds wrote:
>>
>>> Hi Martynas,
>>>
>>> On 25/10/17 10:33, Martynas Jusevičius wrote:
>>>
>>>> Thanks Dave.
>>>>
>>>> We are materializing inferences during ontology initialization to avoid
>>>> using reasoner subsequently (as it impacts performance).
>>>>
>>>
>>> Makes sense.
>>>
>>> So in that case I need to traverse the chain myself, correct?
>>>>
>>>
>>> Not if you've materialized the inferences. If you have constructed the
>>> superClass closure as part of this materialization then the closure
>>> should be visible through the OntAPI.
>>>
>>> If you haven't included that in your materialization then indeed you
>>> would need to traverse the chain yourself - either in the API or via
>>> SPARQL property paths.
>>>
>>
>> What do you mean by materialize the inferences of subClass?
>>
>
> That's not quite the way I phrased it.
>
> All I meant was that the reasoners will in effect compute
>
> (?a rdfs:subClassOf ?b) (?b rdfs:subClassOf ?c)
>  -> (?a rdfs:subClassOf ?c)
>
> [Though technically the builtin reasoners don't use rules for that.]
>
> So if Martynas wants the OntClass.listSuperClasses query to work as he
> expected then he would need to include that in the materialization.
>
> So, as you say, it would include things like:
>
> ClassChild rdf:subClassOf ClassParent
> ClassChildChild rdf:subClassOf ClassParent
> ClassChildChild rdf:subClassOf ClassChild
>
>> It is clear
>> that you include the inferences for the individuals, like:
>>
>> individual rdf:type ClassParent
>> individual rdf:type ClassChild
>> individual rdf:type ClassChildChild
>>
>> But if I also include the materialization for the class definition, at
>> the end, I'm including the "full" ontology model.
>>
>> ClassChild rdf:subClassOf ClassParent
>> ClassChildChild rdf:subClassOf ClassParent
>> ClassChildChild rdf:subClassOf ClassChild
>>
>> Do you recommend to also include the second step?
>>
> To use the standard refrain "it depends what you are specifically trying
> to do".
>
> The benefit is that you can get all superclasses with a simple query. The
> cost, apart from some size growth, is that finding direct superclasses
> becomes very painful. Which is why the built in reasoners have the support
> for "direct" versions which is then exposed in the OntAPI via all the
> "direct" flags. That distinction can get lost in the materialization.
>
> Personally I would not include the subClassOf closure if materializing but
> would rely on query rewriting to such questions on demand. YMMV
>
> Dave
>
>
>> I'm involved in a similar procedure as Martynas.
>>
>> Thanks
>> Jorge
>>
>>
>>
>> Dave
>>>
>>>
>>> On Wed, Oct 25, 2017 at 9:33 AM, Dave Reynolds
>>>> 
>>>> wrote:
>>>>
>>>> On 24/10/17 23:51, Martynas Jusevičius wrote:
>>>>>
>>>>> Hi,
>>>>>>
>>>>>> I thought I understood how OntClass.listSuperClasses() works, but
>>>>>> maybe I
>>>>>> don't.
>>>>>>
>>>>>> I have such a class structure in my ontology (superclass is at the
>>>>>> top):
>>>>>>
>>>>>> 3. https://www.w3.org/ns/ldt/document-hierarchy/domain#Item
>>>>>>2. http://atomgraph.com/ns/platform/domain#Item
>>>>>>1. https://localhost/admin/ns#AgentItem
>>>>>>
>>>>>> Yet when I'm debugging, I can see the pair-wise relationships, but
>>>>>> not the
>>>>>> chain all the way up from 1 to 3:
>>>>>>
>>>>>> 1. getOntology().getOntModel().getOntClass("
>>>>>> https://localhost/admin/ns#AgentItem
>>>>>> ").listSuperClasses(false).toList().toString()
>>>>>>
>>>>>> [https://localhost/admin/ns#ItemOfAgentContainer,
>>>>>> http://atomgraph.com/ns/platform/domain#Item]
>>>>>>
>>>>>> 2. getOntology().getOntModel().getOntClass("
>>>>>> http://atomgraph.com/ns/platform/domain#Item
>>>>>> ").listSuperClasses(false).toList().toString()
>>>>>>
>>>>>> [https://www.w3.org/ns/ldt/document-hierarchy/domain#Item]
>>>>>>
>>>>>> I can see that within the method hasPropertyValue(
>>>>>> getProfile().SUB_CLASS_OF(), "SUB_CLASS_OF", cls ) returns false.
>>>>>>
>>>>>> Why is that so? Is my usage wrong?
>>>>>>
>>>>>> Additional info:
>>>>>> getOntology().getOntModel().getSpecification().getProfile() ==
>>>>>> OWLProfile
>>>>>> getOntology().getOntModel().getSpecification().getReasoner() == null
>>>>>>
>>>>>>
>>>>>> If I recall correctly the OntModel API is designed to retrieve
>>>>> whatever is
>>>>> stated within the underlying model. The notion was that there was no
>>>>> point
>>>>> in having the OntModel API replicate what reasoning does.
>>>>>
>>>>> So to see the subclass closure you need to have a sufficient reasoner
>>>>> configured.
>>>>>
>>>>>
>>>>> Dave
>>>>>
>>>>>
>>>>
>>>


Re: [3.0.1] listSuperClasses() does not traverse?

2017-10-25 Thread Martynas Jusevičius
I ended up using JenaUtil.getAllSuperClasses() from SPINRDF:
https://github.com/spinrdf/spinrdf/blob/a7fc9a1ca7badecb9a8d858f7a8a33bb106e629f/src/main/java/org/spinrdf/util/JenaUtil.java#L401
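
For readers who want to avoid the SPINRDF dependency, the traversal amounts to a transitive closure over the direct rdfs:subClassOf links. A minimal self-contained sketch, with plain Java collections standing in for the OntModel and abbreviated class names for the IRIs above:

```java
import java.util.*;

public class SuperClassClosure {

    // Collect all superclasses reachable from 'cls' over direct
    // subClassOf links, guarding against cycles.
    static Set<String> allSuperClasses(String cls,
                                       Map<String, Set<String>> directSupers) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> stack = new ArrayDeque<>();
        stack.push(cls);
        while (!stack.isEmpty()) {
            String current = stack.pop();
            for (String sup : directSupers.getOrDefault(current, Collections.emptySet())) {
                if (result.add(sup)) {  // add() is false if already visited
                    stack.push(sup);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> direct = new HashMap<>();
        direct.put("ns:AgentItem", new HashSet<>(Arrays.asList("apl:Item")));
        direct.put("apl:Item", new HashSet<>(Arrays.asList("dh:Item")));

        // The full chain 1 -> 2 -> 3 from the original question
        System.out.println(allSuperClasses("ns:AgentItem", direct)); // [apl:Item, dh:Item]
    }
}
```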

On Wed, Oct 25, 2017 at 1:22 PM, Martynas Jusevičius  wrote:

> Dave,
>
> I think I get it now. As you mention, we do not include subClassOf closure
> during materialization, as the size grows but most of the inferences are
> irrelevant.
>
> So we only have direct relationships in the data but are in fact looking
> for an inferred one, which is not there.
>
> On Wed, Oct 25, 2017 at 12:36 PM, Dave Reynolds  > wrote:
>
>> On 25/10/17 11:10, George News wrote:
>>
>>> On 2017-10-25 11:54, Dave Reynolds wrote:
>>>
>>>> Hi Martynas,
>>>>
>>>> On 25/10/17 10:33, Martynas Jusevičius wrote:
>>>>
>>>>> Thanks Dave.
>>>>>
>>>>> We are materializing inferences during ontology initialization to avoid
>>>>> using reasoner subsequently (as it impacts performance).
>>>>>
>>>>
>>>> Makes sense.
>>>>
>>>> So in that case I need to traverse the chain myself, correct?
>>>>>
>>>>
>>>> Not if you've materialized the inferences. If you have constructed the
>>>> superClass closure as part of this materialization then the closure
>>>> should be visible through the OntAPI.
>>>>
>>>> If you haven't included that in your materialization then indeed you
>>>> would need to traverse the chain yourself - either in the API or via
>>>> SPARQL property paths.
>>>>
>>>
>>> What do you mean by materialize the inferences of subClass?
>>>
>>
>> That's not quite the way I phrased it.
>>
>> All I meant was that the reasoners will in effect compute
>>
>> (?a rdfs:subClassOf ?b) (?b rdfs:subClassOf ?c)
>>  -> (?a rdfs:subClassOf ?c)
>>
>> [Though technically the builtin reasoners don't use rules for that.]
>>
>> So if Martynas wants the OntClass.listSuperClasses query to work as he
>> expected then he would need to include that in the materialization.
>>
>> So, as you say, it would include things like:
>>
>> ClassChild rdf:subClassOf ClassParent
>> ClassChildChild rdf:subClassOf ClassParent
>> ClassChildChild rdf:subClassOf ClassChild
>>
>>> It is clear
>>> that you include the inferences for the individuals, like:
>>>
>>> individual rdf:type ClassParent
>>> individual rdf:type ClassChild
>>> individual rdf:type ClassChildChild
>>>
>>> But if I also include the materialization for the class definition, at
>>> the end, I'm including the "full" ontology model.
>>>
>>> ClassChild rdf:subClassOf ClassParent
>>> ClassChildChild rdf:subClassOf ClassParent
>>> ClassChildChild rdf:subClassOf ClassChild
>>>
>>> Do you recommend to also include the second step?
>>>
>> To use the standard refrain "it depends what you are specifically trying
>> to do".
>>
>> The benefit is that you can get all superclasses with a simple query. The
>> cost, apart from some size growth, is that finding direct superclasses
>> becomes very painful. Which is why the built in reasoners have the support
>> for "direct" versions which is then exposed in the OntAPI via all the
>> "direct" flags. That distinction can get lost in the materialization.
>>
>> Personally I would not include the subClassOf closure if materializing
>> but would rely on query rewriting to such questions on demand. YMMV
>>
>> Dave
>>
>>
>>> I'm involved in a similar procedure as Martynas.
>>>
>>> Thanks
>>> Jorge
>>>
>>>
>>>
>>> Dave
>>>>
>>>>
>>>> On Wed, Oct 25, 2017 at 9:33 AM, Dave Reynolds
>>>>> 
>>>>> wrote:
>>>>>
>>>>> On 24/10/17 23:51, Martynas Jusevičius wrote:
>>>>>>
>>>>>> Hi,
>>>>>>>
>>>>>>> I thought I understood how OntClass.listSuperClasses() works, but
>>>>>>> maybe I
>>>>>>> don't.
>>>>>>>
>>>>>>> I have such a class structure in my ontology (superclass is at the
>>>>>>> top):
>>>>>>>
>>>>>>> 3. https://www.w3.org/ns/l

Re: distinct in SPARQL group_concat

2017-11-07 Thread Martynas Jusevičius
I think that literals with the same lexical form but different language tags are
not identical.
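
If duplicates that differ only in language tag are unwanted, one option is to normalize inside the query, e.g. group_concat(distinct str(?o_label_g); separator=", "), since STR() strips the tag. The same normalization in plain Java, as a self-contained sketch (the literals here are just N-Triples-style strings, not Jena objects):

```java
import java.util.*;

public class StripLangDistinct {

    // Strip a SPARQL-style language tag ("Agent"@en-US -> "Agent"),
    // mirroring what STR(?o_label_g) would do inside the query.
    static String stripLang(String literal) {
        int at = literal.lastIndexOf("\"@");
        return at >= 0 ? literal.substring(0, at + 1) : literal;
    }

    public static void main(String[] args) {
        List<String> labels = Arrays.asList("\"Agent\"", "\"Agent\"@en-US");
        Set<String> distinct = new LinkedHashSet<>();
        for (String l : labels) {
            distinct.add(stripLang(l));
        }
        System.out.println(String.join(", ", distinct)); // "Agent"
    }
}
```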

On Tue, Nov 7, 2017 at 3:34 PM, Mikael Pesonen 
wrote:

>
> Hi,
>
> in FOAF schema there are values
>
> rdfs:label "Agent"
> rdfs:label "Agent"@en-US
>
> When making query
>
> SELECT (group_concat(distinct ?o_label_g; separator=", ") as ?o_label)
> WHERE
> ...
> ?o rdfs:label ?o_label_g
> ...
>
> result for ?o_label is
>
> "Agent, Agent"
>
> and not "Agent". Is that how it should work, or should DISTINCT work with
> language labels stripped out of the values?
>
> If this works how it should, is there an easy way to remove duplicates?
>
> Br,
> Mikael
>
> --
> Lingsoft - 30 years of Leading Language Management
>
> www.lingsoft.fi
>
> Speech Applications - Language Management - Translation - Reader's and
> Writer's Tools - Text Tools - E-books and M-books
>
> Mikael Pesonen
> System Engineer
>
> e-mail: mikael.peso...@lingsoft.fi
> Tel. +358 2 279 3300
>
> Time zone: GMT+2
>
> Helsinki Office
> Eteläranta 10
> 
> FI-00130 Helsinki
> FINLAND
>
> Turku Office
> Kauppiaskatu 5 A
> FI-20100 Turku
> FINLAND
>
>


Re: Loading dataset with relative IRIs

2017-11-21 Thread Martynas Jusevičius
You cannot. The RDF data model is based on absolute URIs.

On Tue, Nov 21, 2017 at 3:35 PM, Mohammad Noorani Bakerally <
noorani.bakera...@gmail.com> wrote:

> I have a dataset in a trig file with resources having relative iris, when
> loading them with the method RDFDataMgr.loadDataset, all the relative IRIs
> are converted to absolute iris, how can I prevent this, I want them to
> retain their relative IRIs?
>


Re: Loading dataset with relative IRIs

2017-11-21 Thread Martynas Jusevičius
Yes. Did you look at the JavaDoc?
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/RDFDataMgr.html#read-org.apache.jena.query.Dataset-java.lang.String-java.lang.String-org.apache.jena.riot.Lang-
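
Concretely, that overload takes a base URI as the third argument. An untested sketch, assuming Jena on the classpath and a local data.trig file:

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

Dataset dataset = DatasetFactory.create();
// Relative IRIs in data.trig are resolved against the supplied base
RDFDataMgr.read(dataset, "data.trig", "http://example.org/base/", Lang.TRIG);
```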

On Tue, Nov 21, 2017 at 3:54 PM, Mohammad Noorani Bakerally <
noorani.bakera...@gmail.com> wrote:

> is it possible to explicitly set a base when loading the dataset ?
>
> On Tue, Nov 21, 2017 at 3:46 PM, Martynas Jusevičius <
> marty...@atomgraph.com
> > wrote:
>
> > You cannot. RDF data model is based on absolute URIs.
> >
> > On Tue, Nov 21, 2017 at 3:35 PM, Mohammad Noorani Bakerally <
> > noorani.bakera...@gmail.com> wrote:
> >
> > > I have a dataset in a trig file with resources having relative iris,
> when
> > > loading them with the method RDFDataMgr.loadDataset, all the relative
> > IRIs
> > > are converted to absolute iris, how can I prevent this, I want them to
> > > retain their relative IRIs ? ‌
> > >
> >
>


Re: problem with VALUES querybuilder

2017-11-23 Thread Martynas Jusevičius
You should start a new thread instead of adding to an unrelated one.
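
Returning to the quoted topic below: if I recall correctly, later Jena releases added addWhereValueVar(...) to the query builder, which places the VALUES block inside the WHERE clause. An unverified sketch:

```java
import org.apache.jena.arq.querybuilder.SelectBuilder;

SelectBuilder sb = new SelectBuilder();
// addWhereValueVar (in later querybuilder versions) puts the VALUES
// clause inside the WHERE block rather than at the top level
sb.addWhereValueVar("item", "spoo", "flarn");
System.out.println(sb.buildString());
// roughly: SELECT * WHERE { VALUES ?item { "spoo" "flarn" } }
```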

On Thu, Nov 23, 2017 at 2:20 AM, Neda Alipanah 
wrote:

> Hello there,
> I have a quick question. I am loading a 25 Meg Owl file to the memory
> using the following commands. My code is working fine through the
> IDE(IntelliJ), but when I create a runnable Jar, it does not find the file.
> I already put the owl file directory on the classpath but I get an
> ExceptionInInitializerError.
>
> // Create an empty model
> model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
>
> // Use the FileManager to find the input file
> InputStream in = FileManager.get().open(inPath);
>
>
> Really appreciate if you can provide a solution for the problem.
>
>
> Best Regards,
>
> Neda
>
>
> On Tue, Nov 21, 2017 at 6:52 AM, Andy Seaborne  wrote:
>
>> Yes, there is a difference.
>>
>> It (the join) happens just before project and after any GROUP BY.
>>
>> See the algebra at http://www.sparql.org/query-validator.html
>>
>> Andy
>>
>>
>> On 21/11/17 14:46, Claude Warren wrote:
>>
>>> based on https://www.w3.org/TR/sparql11-query/#inline-data-examples
>>>
>>> there is no difference between values  blocks inside or outside a graph
>>> pattern.
>>>
>>> On Tue, Nov 21, 2017 at 2:35 PM, Claude Warren  wrote:
>>>
>>> Currently the values are always placed in the top level of the query.

 Q: does it make a difference to execution? (I suspect it does, but I want
 to make sure before I proceed to add a method to place it inside the graph
 pattern.)

 Claude

 On Tue, Nov 21, 2017 at 1:20 PM, Rob Vesse 
 wrote:

 The output you get is syntactically valid - VALUES is allowed at the top
> level of the query as well as within graph patterns
>
>   It is not clear to me if the latter is actually possible with the
> current query builder; Claude can probably give you a more detailed
> answer
>
> Rob
>
>
> On 21/11/2017, 12:05, "Chris Dollin" 
> wrote:
>
>  Dear All
>
>  I'm missing something with use of the query builder to create
> VALUES
>  clauses.
>  The code
>
>  @Test public void buildValues() {
>  SelectBuilder sb = new SelectBuilder();
>  sb.addValueVar("item",  "spoo", "flarn");
>  System.err.println(sb.buildString());
>  }
>
>  generates
>
>SELECT  *
>WHERE
>  {  }
>VALUES ?item { "spoo" "flarn" }
>
>  which I believe to be syntactically incorrect but in any case I
> want
> the
>  generated VALUES clause to be inside the WHERE {} ie
>
>SELECT * WHERE { VALUES ?item {"spoo" "flarn"} }
>
>  What should I be doing and how should I have known that?
>
>  Chris
>
>  PS please to excuse the misuse of @Test here ... exploratory use
> only.
>
>  --
>  "What I don't understand is this ..."   Trevor Chaplin, /The
> Beiderbeck
>  Affair/
>
>  Epimorphics Ltd, http://www.epimorphics.com
>  Registered address: Court Lodge, 105 High Street, Portishead,
> Bristol
> 
> BS20
>  6PT
>  Epimorphics Ltd. is a limited company registered in England
> (number
> 7016688)
>
>
>
>
>
>
>

 --
 I like: Like Like - The likeliest place on the web
 
 LinkedIn: http://www.linkedin.com/in/claudewarren


>>>
>>>
>>>
>


Finding (sub)class instance in raw Model

2017-12-02 Thread Martynas Jusevičius
Hi,

what would be the most effective way to find a Resource that
- belongs to a raw Model
- in an instance of class X in an InfModel (which is based on the Model
above)
?

I came up with something which iterates instances and filters by membership
(lambda syntax not tested):

infModel.listSubjectsWithProperty(RDF.type, class).
filterKeep(r -> infModel.getRawModel().containsResource(r))

If a lot of instances were inferred in the InfModel, I can see this being
slow.

Is there a better way? Maybe inversing the iterations somehow?


Martynas
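
One way to invert the iteration, if the raw model is much smaller than the set of inferred instances (an untested sketch against the Model/InfModel API):

```java
// Iterate subjects of the (smaller) raw model and ask the InfModel
// about class membership, instead of listing all inferred instances
// and filtering by raw-model membership.
ResIterator it = infModel.getRawModel().listSubjects();
while (it.hasNext()) {
    Resource r = it.next();
    if (infModel.contains(r, RDF.type, cls)) {
        // r exists in the raw model and is an (inferred) instance of cls
    }
}
```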


Re: Finding (sub)class instance in raw Model

2017-12-02 Thread Martynas Jusevičius
Possible that there are multiple matches, but unlikely :) In that case I
just need any one of them.

InfModel is only used for this case. I don't see how I can avoid it. If say
I want to find an instance of foaf:Document or its subclasses, just looking
at the raw model such as

:instance a my:Class .

is not enough, because it doesn't know whether my:Class is a subclass of
foaf:Document or not, without adding schema. That is why I'm using InfModel.

On Sat, Dec 2, 2017 at 6:31 PM, ajs6f  wrote:

> Is it possible in your use case that there is more than one of these guys,
> or do you have reason to assume that if there is one, it is unique? If
> there is more than one, do you need to produce all examples? Is this the
> only use of the inference model (IOW could you do the inference in a
> different way) or do you need other inference facilities and therefore the
> InfModel is really required?
>
> ajs6f
>
> > On Dec 2, 2017, at 12:27 PM, Martynas Jusevičius 
> wrote:
> >
> > Hi,
> >
> > what would be the most effective way to find a Resource that
> > - belongs to a raw Model
> > - in an instance of class X in an InfModel (which is based on the Model
> > above)
> > ?
> >
> > I came up with something which iterates instances and filters by
> membership
> > (lambda syntax not tested):
> >
> > infModel.listSubjectsWithProperty(RDF.type, class).
> >filterKeep(r -> infModel.getRawModel().containsResource(r))
> >
> > If a lot of instances were inferred in the InfModel, I can see this being
> > slow.
> >
> > Is there a better way? Maybe inversing the iterations somehow?
> >
> >
> > Martynas
>
>


Overriding JsonLdOptions / disabling HTTP call

2017-12-03 Thread Martynas Jusevičius
Hi,

I have noticed that after I upgraded to Jena 3.0.1 some time ago, JSON-LD
writer stopped working. This is due to

Exception occurred in target VM:
org/apache/http/impl/client/SystemDefaultHttpClient
java.lang.NoClassDefFoundError:
org/apache/http/impl/client/SystemDefaultHttpClient
at com.github.jsonldjava.core.JsonLdOptions.(JsonLdOptions.java:52)

in JsonLDWriter.serialize().

I had replaced all Jena's Apache Client usages with Jersey client, so this
was an unpleasant surprise.

Why would a writer make HTTP calls? I guess it attempts to load some
@context or something, but this does not make sense when writing?

I know jsonld-java is a 3rd party library, so I guess my options to disable
the HTTP call are limited? Would a newer version make it possible?


Martynas


Re: Overriding JsonLdOptions / disabling HTTP call

2017-12-03 Thread Martynas Jusevičius
Follows below.

Forgot to mention one important thing :) I'm excluding Apache HTTP Client
from Jena:


<dependency>
    <groupId>org.apache.jena</groupId>
    <artifactId>jena-arq</artifactId>
    <version>3.0.1</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient-cache</artifactId>
        </exclusion>
    </exclusions>
</dependency>

So I guess it's really a classpath issue, not specific to Jena: Jersey
client wraps a different AHC version than Jena/jsonld-java does.

But if JsonLdOptions would not call it, I think it wouldn't be a problem?

The stacktrace:

Exception occurred in target VM:
org/apache/http/impl/client/SystemDefaultHttpClient
java.lang.NoClassDefFoundError:
org/apache/http/impl/client/SystemDefaultHttpClient
at com.github.jsonldjava.core.JsonLdOptions.(JsonLdOptions.java:52)
at org.apache.jena.riot.out.JsonLDWriter.serialize(JsonLDWriter.java:87)
at org.apache.jena.riot.out.JsonLDWriter.write(JsonLDWriter.java:67)
at org.apache.jena.riot.out.JsonLDWriter.write(JsonLDWriter.java:77)
at org.apache.jena.riot.system.RiotLib$WriterAdapter.write(RiotLib.java:333)
at org.apache.jena.riot.adapters.RDFWriterRIOT.write(RDFWriterRIOT.java:94)
at org.apache.jena.rdf.model.impl.ModelCom.write(ModelCom.java:355)
at com.atomgraph.core.io.ModelProvider.write(ModelProvider.java:153)
at
com.atomgraph.platform.server.io.SkolemizingModelProvider.write(SkolemizingModelProvider.java:85)
at
com.atomgraph.server.io.BasedModelProvider.writeTo(BasedModelProvider.java:83)
at
com.atomgraph.server.io.BasedModelProvider.writeTo(BasedModelProvider.java:43)
at
com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:302)
at
com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1510)
at
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at
com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at
com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at
com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:292)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:240)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:207)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:212)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:94)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:616)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:141)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79)
at
org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:620)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:502)
at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1132)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:684)
at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1539)
at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1495)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException:
org.apache.http.impl.client.SystemDefaultHttpClient
at
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1333)
at
org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1167)
... 40 more


On Sun, Dec 3, 2017 at 7:40 PM, ajs6f  wrote:

> Can you give a complete stack trace?
>
> ajs6f
>
> > On Dec 3, 2017, at 1:27 PM, Martynas Jusevičius 
> wrote:
> >
> > Hi,
> >
> > I have noticed that after I upgraded to Jena 3.0.1 some time ago, JSON-LD
> > writer stopped working. This is due to
> >
> > Exception occurred in target VM:
> > org/apache/http/impl/client/SystemD

Specifying Dataset as in SPARQL Protocol

2017-12-27 Thread Martynas Jusevičius
Hey,

I am implementing some of the last bits in the SPARQL Protocol,
namely 2.1.4 Specifying an RDF Dataset and 2.2.3 Specifying an RDF Dataset.

Would the following be a correct interpretation given a default Jena
Dataset?

public Dataset specifyDataset(Dataset defaultDataset, List<URI>
defaultGraphUris, List<URI> namedGraphUris)
{
if (!defaultGraphUris.isEmpty() || !namedGraphUris.isEmpty())
{
Dataset specified = DatasetFactory.create();

for (URI defaultGraphUri : defaultGraphUris)

specified.getDefaultModel().add(defaultDataset.getNamedModel(defaultGraphUri.toString()));
for (URI namedGraphUri : namedGraphUris)
specified.addNamedModel(namedGraphUri.toString(),
defaultDataset.getNamedModel(namedGraphUri.toString()));

return specified;
}

return defaultDataset;
}

I assume there should be some similar code in Fuseki somewhere.

Thanks.


Martynas


Re: Specifying Dataset as in SPARQL Protocol

2017-12-27 Thread Martynas Jusevičius
Thank you!
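
For the record, Andy's pointers resolve to something like the following (an untested sketch of Jena internals; I am recalling the signatures of DatasetDescription.create and DynamicDatasets.dynamicDataset from memory, so treat them as assumptions):

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.sparql.core.DatasetDescription;
import org.apache.jena.sparql.core.DynamicDatasets;

// Graph URI lists come from the protocol request as List<String>
DatasetDescription desc = DatasetDescription.create(defaultGraphUris, namedGraphUris);
Dataset specified = DynamicDatasets.dynamicDataset(desc, defaultDataset, false);
```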

On Wed, Dec 27, 2017 at 9:58 PM, Andy Seaborne  wrote:

> Look at
>   class DynamicDatasets
>   use of DatasetDescription
>   SPARQL_Query.decideDataset
>
> Andy
>
>
> On 27/12/17 20:37, Martynas Jusevičius wrote:
>
>> Hey,
>>
>> I am implementing some of the last bits in the SPARQL Protocol,
>> namely 2.1.4 Specifying an RDF Dataset and 2.2.3 Specifying an RDF
>> Dataset.
>>
>> Would the following be a correct interpretation given a default Jena
>> Dataset?
>>
>>  public Dataset specifyDataset(Dataset defaultDataset, List<URI>
>> defaultGraphUris, List<URI> namedGraphUris)
>>  {
>>  if (!defaultGraphUris.isEmpty() || !namedGraphUris.isEmpty())
>>  {
>>  Dataset specified = DatasetFactory.create();
>>
>>  for (URI defaultGraphUri : defaultGraphUris)
>>
>> specified.getDefaultModel().add(defaultDataset.getNamedModel
>> (defaultGraphUri.toString()));
>>  for (URI namedGraphUri : namedGraphUris)
>>  specified.addNamedModel(namedGraphUri.toString(),
>> defaultDataset.getNamedModel(namedGraphUri.toString()));
>>
>>  return specified;
>>  }
>>
>>  return defaultDataset;
>>  }
>>
>> I assume there should be some similar code in Fuseki somewhere.
>>
>> Thanks.
>>
>>
>> Martynas
>>
>>


[3.0.1] Models (not) isomorphic

2018-01-02 Thread Martynas Jusevičius
Hi,

I am writing a unit test for an RDF parser:
https://github.com/AtomGraph/Core/blob/master/src/test/java/com/atomgraph/core/riot/lang/RDFPostReaderTest.java

I have constructed two Models which only differ in blank nodes,
yet wanted.isIsomorphicWith(got) returns false.

Here are the wanted.toString() and got.toString() output:

http://subject2 @http://predicate3 "literal1"; http://subject3
@http://predicate4 "literal2"@da; b1 @http://rdf.org/#rest
http://rdf.org/#nil; b1 @http://rdf.org/#first http://something/;
http://subject1 @http://predicate2 http://object3; http://subject1 @
http://predicate2 http://object2; http://subject1 @http://predicate1
http://object1; http://subject1 @http://dc.org/#title "title"@da;
http://subject4 @http://dct.org/#hasPart b1; http://subject4 @
http://predicate5 "literal3"^^http://type} |  [http://subject2,
http://predicate3, "literal1"] [http://subject3, http://predicate4,
"literal2"@da] [b1, http://rdf.org/#rest, http://rdf.org/#nil] [b1,
http://rdf.org/#first, http://something/] [http://subject1,
http://predicate2, http://object3] [http://subject1, http://predicate2,
http://object2] [http://subject1, http://predicate1, http://object1] [
http://subject1, http://dc.org/#title, "title"@da] [http://subject4,
http://dct.org/#hasPart, b1] [http://subject4, http://predicate5,
"literal3"^^http://type]>

http://subject2 @http://predicate3 "literal1"; http://subject3
@http://predicate4 "literal2"@da; http://subject1 @http://predicate2
http://object3; http://subject1 @http://predicate2 http://object2;
http://subject1 @http://predicate1 http://object1; http://subject1 @
http://dc.org/#title "title"@da; 9ee64f4ba50df2bb8b28a1a473990da2 @
http://rdf.org/#rest http://rdf.org/#nil; 9ee64f4ba50df2bb8b28a1a473990da2 @
http://rdf.org/#first http://something/; http://subject4 @
http://dct.org/#hasPart 9ee64f4ba50df2bb8b28a1a473990da2; http://subject4 @
http://predicate5 "literal3"^^http://type} |  [http://subject2,
http://predicate3, "literal1"] [http://subject3, http://predicate4,
"literal2"@da] [http://subject1, http://predicate2, http://object3] [
http://subject1, http://predicate2, http://object2] [http://subject1,
http://predicate1, http://object1] [http://subject1, http://dc.org/#title,
"title"@da] [9ee64f4ba50df2bb8b28a1a473990da2, http://rdf.org/#rest,
http://rdf.org/#nil] [9ee64f4ba50df2bb8b28a1a473990da2,
http://rdf.org/#first, http://something/] [http://subject4,
http://dct.org/#hasPart, 9ee64f4ba50df2bb8b28a1a473990da2] [http://subject4,
http://predicate5, "literal3"^^http://type]>

When I sort the triples by the subject, they seem identical to me, sans
bnode labels:

[http://subject1, http://predicate2, http://object3]
[http://subject1, http://predicate2, http://object2]
[http://subject1, http://predicate1, http://object1]
[http://subject1, http://dc.org/#title, "title"@da]
[http://subject2, http://predicate3, "literal1"]
[http://subject3, http://predicate4, "literal2"@da]
[http://subject4, http://dct.org/#hasPart, b1]
[http://subject4, http://predicate5, "literal3"^^http://type]
[b1, http://rdf.org/#rest, http://rdf.org/#nil]
[b1, http://rdf.org/#first, http://something/]

[http://subject1, http://predicate2, http://object3]
[http://subject1, http://predicate2, http://object2]
[http://subject1, http://predicate1, http://object1]
[http://subject1, http://dc.org/#title, "title"@da]
[http://subject2, http://predicate3, "literal1"]
[http://subject3, http://predicate4, "literal2"@da]
[http://subject4, http://dct.org/#hasPart, 30b98db4d09af8c4a56f907aec5a5348]
[http://subject4, http://predicate5, "literal3"^^http://type]
[30b98db4d09af8c4a56f907aec5a5348, http://rdf.org/#rest, http://rdf.org/#nil]
[30b98db4d09af8c4a56f907aec5a5348, http://rdf.org/#first, http://something/]

What am I missing here?
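
One way to narrow a mismatch like this down (a hedged sketch, not from the thread; assumes a Jena 3.x+ classpath, and the class name `DiffModels` is illustrative) is `Model.difference`, which returns the statements present in one model but not the other. Note it compares statements term by term, so any statement involving a blank node will always show up in the diff, even between isomorphic models:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class DiffModels {
    // Builds a tiny "expected" model and an empty "parsed" model, prints the
    // statements that are in expected but not in parsed, and returns the count.
    static long missingCount() {
        Model expected = ModelFactory.createDefaultModel();
        Model parsed = ModelFactory.createDefaultModel();
        expected.add(expected.createResource("http://subject1"),
                     expected.createProperty("http://predicate1"),
                     expected.createResource("http://object1"));
        Model missing = expected.difference(parsed); // statements only in expected
        missing.listStatements().forEachRemaining(System.out::println);
        return missing.size();
    }

    public static void main(String[] args) {
        missingCount();
    }
}
```

Running both `expected.difference(parsed)` and `parsed.difference(expected)` on the models above would show only the blank-node statements on each side, which points at the literal/datatype triple as the real difference.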


Re: [3.0.1] Models (not) isomorphic

2018-01-03 Thread Martynas Jusevičius
System.out.println("-- Expected --");
expected.write(System.out, Lang.TURTLE.getName());
System.out.println("-- Parsed --");
parsed.write(System.out, Lang.TURTLE.getName());

assertIsomorphic(parsed, parsed);
assertIsomorphic(expected, expected);
assertIsomorphic(expected, parsed);

The first two assertions pass; the third one fails. The output:

-- Expected --
<http://subject2>  <http://predicate3>  "literal1" .

<http://subject3>  <http://predicate4>  "literal2"@da .

<http://subject1>  <http://dc.org/#title>
"title"@da ;
<http://predicate1> <http://object1> ;
<http://predicate2> <http://object3> , <http://object2> .

<http://subject4>  <http://dct.org/#hasPart>
[ <http://rdf.org/#first>  <http://something/> ;
  <http://rdf.org/#rest>   <http://rdf.org/#nil>
] ;
<http://predicate5>    "literal3"^^<http://type> .
-- Parsed --
<http://subject2>  <http://predicate3>  "literal1" .

<http://subject3>  <http://predicate4>  "literal2"@da .

<http://subject1>  <http://dc.org/#title>
"title"@da ;
<http://predicate1> <http://object1> ;
<http://predicate2> <http://object3> , <http://object2> .

<http://subject4>  <http://dct.org/#hasPart>
[ <http://rdf.org/#first>  <http://something/> ;
  <http://rdf.org/#rest>   <http://rdf.org/#nil>
] ;
<http://predicate5>    "literal3"^^<http://type> .

On Wed, Jan 3, 2018 at 11:22 AM, Andy Seaborne  wrote:

> Martynas,
>
> Do you have anything readable?
>
> If you write-and-read-back each model, do they test isomorphic to
> themselves?  To each other after read-write?
>
> Andy
>
>
> On 03/01/18 01:05, Martynas Jusevičius wrote:
>
>> Hi,
>>
>> I am writing a unit test for an RDF parser:
>> https://github.com/AtomGraph/Core/blob/master/src/test/java/
>> com/atomgraph/core/riot/lang/RDFPostReaderTest.java
>>
>> I have constructed two Models which only differ in blank nodes,
>> yet wanted.isIsomorphicWith(got) returns false.
>>
>> Here are the wanted.toString() and got.toString() output:
>>
>> http://subject2 @http://predicate3 "literal1";
>> http://subject3
>> @http://predicate4 "literal2"@da; b1 @http://rdf.org/#rest
>> http://rdf.org/#nil; b1 @http://rdf.org/#first http://something/;
>> http://subject1 @http://predicate2 http://object3; http://subject1 @
>> http://predicate2 http://object2; http://subject1 @http://predicate1
>> http://object1; http://subject1 @http://dc.org/#title "title"@da;
>> http://subject4 @http://dct.org/#hasPart b1; http://subject4 @
>> http://predicate5 "literal3"^^http://type} |  [http://subject2,
>> http://predicate3, "literal1"] [http://subject3, http://predicate4,
>> "literal2"@da] [b1, http://rdf.org/#rest, http://rdf.org/#nil] [b1,
>> http://rdf.org/#first, http://something/] [http://subject1,
>> http://predicate2, http://object3] [http://subject1, http://predicate2,
>> http://object2] [http://subject1, http://predicate1, http://object1] [
>> http://subject1, http://dc.org/#title, "title"@da] [http://subject4,
>> http://dct.org/#hasPart, b1] [http://subject4, http://predicate5,
>> "literal3"^^http://type]>
>>
>> http://subject2 @http://predicate3 "literal1";
>> http://subject3
>> @http://predicate4 "literal2"@da; http://subject1 @http://predicate2
>> http://object3; http://subject1 @http://predicate2 http://object2;
>> http://subject1 @http://predicate1 http://object1; http://subject1 @
>> http://dc.org/#title "title"@da; 9ee64f4ba50df2bb8b28a1a473990da2 @
>> http://rdf.org/#rest http://rdf.org/#nil; 9ee64f4ba50df2bb8b28a1a473990da2
>> @
>> http://rdf.org/#first http://something/; http://subject4 @
>> http://dct.org/#hasPart 9ee64f4ba50df2bb8b28a1a473990da2; http://subject4
>> @
>> http://predicate5 "literal3"^^http://type} |  [http://subject2,
>> http://predicate3, "literal1"] [http://subject3, http://predicate4,
>> "literal2"@da] [http://subject1, http://predicate2, http://object3] [
>> http://subject1, http://predicate2, http://object2] [http://subject1,
>> http://predicate1, http://object1] [http://subject1, http://dc.org/#title
>>

Re: Jena with RDBMS or alternative graph DB

2018-01-03 Thread Martynas Jusevičius
Can't you use ontop or D2RQ to get RDF and SPARQL access?
https://github.com/ontop/ontop

On Thu, Jan 4, 2018 at 12:02 AM, Dimov, Stefan  wrote:

> Thanks Andy,
>
> I wanna clarify my questions.
>
> In case we don’t use TDB, the alternative is already chosen (whether we
> like it or not). It’s a specific platform that has both a graph and a
> relational DB, both very fast.
>
> Neither of them has a SPARQL adapter.
>
> We want to avoid implementing SPARQL layers, so my question is:
>
> In which case would I have less effort (NOT implementing a SPARQL layer):
> a relational DB adapter or a graph DB adapter? Or will I need to implement
> and maintain the SPARQL layer in both cases?
>
> Regards,
> Stefan
>
>
>
> On 12/22/17, 4:41 PM, "Andy Seaborne"  wrote:
>
>
>
> On 23/12/17 00:15, Dimov, Stefan wrote:
> > Thanks Andy,
> >
> > I didn’t quite understand this:
> >
> > “1/ Use SPARQL (interface RDFConnection) to connect to a remote
> triple
> >  store.  SPARQL triple stores are very good at implementing the
> standards.”
> >
> > Does it mean that the graph DB I wanna use, should (out-of-the-box)
> support triples and SPARQL or this is the part that I should implement in
> order to use it with Jena?
>
> There are quite a few SPARQL triple stores.  Choose and use.
>
> RDFConnection is part of the Jena codebase.
>
> >
> > Can you, please, point me, to examples how to:
> >
> > 1. Implement SQL adapter
>
> See the code for jena-sdb. This component is "maintenance only".
>
> > 2. Implement usage of alternative graph DB
>
> Choose your SPARQL triplestore, download and use!
>
> SPARQL and its protocols are standards. The triple store can be any one
> that implements the standards. In the SPARQL world, compliance levels
> are high.
>
> >
> > Happy holidays!
> > Stefan
> >
> >
> > On 12/21/17, 1:46 AM, "Andy Seaborne"  wrote:
> >
> >
> >
> >  On 21/12/17 04:41, Dimov, Stefan wrote:
> >  > Hi all,
> >  >
> >  > Where can I find more detailed info on how can use Jena with:
> >  >
> >  >
> >  >1.  RDBMS
> >  >   *   Does it support just any RDBMS?
> >
> >  The supported databases are described in the documentation:
> >
> >  https://jena.apache.org/documentation/sdb/databases_
> supported.html
> >
> >  Note:
> >  [[
> >  Use of SDB for new applications is not recommended.
> >  This component is "maintenance only".
> >  TDB is faster, more scalable and better supported than SDB.
> >  ]]
> >
> >  >   *   If not, and I want to use it with different (from
> the supported) RDMS what should I do?
> >
> >  Do you need it to be an SQL database? (Someone was doing HAMA
> at one
> >  time but we didn't hear back from them).
> >
> >  You have to write the SQL adapter - SQL syntax, especially for
> DDL, is
> >  quite variable.  There are various examples to start with.
> >
> >  But SDB does not have the scale and speed of a native graph
> database.
> >
> >  >1.  Alternative (not TDB) graph DB
> >  >   *   Is it possible
> >
> >  Yes.
> >
> >  1/ Use SPARQL (interface RDFConnection) to connect to a remote
> triple
> >  store.  SPARQL triple stores are very good at implementing the
> standards.
> >
> >  2/ See if the store of your choice offers a Jena adapter.
> >
> >   Andy
> >
> >  >   *   If not, can it easily be done?
> >  >
> >  > Regards,
> >  > Stefan
> >  >
> >
> >
>
>
>
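
Option 1/ above (connect over SPARQL via `RDFConnection`) can be sketched in a few lines. This is a hedged illustration, not from the thread: it assumes Jena 3.x+ on the classpath, the class name `ConnSketch` and the example triple are made up, and it uses a local in-memory dataset so it runs without a server; for a remote store you would pass the endpoint URL to `connect` instead.

```java
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class ConnSketch {
    // Inserts one triple and counts the store's triples, talking to the
    // store only through SPARQL Update and SPARQL Query.
    static int countAfterInsert(RDFConnection conn) {
        conn.update("INSERT DATA { <http://s> <http://p> <http://o> }");
        int[] n = {0};
        conn.querySelect("SELECT (COUNT(*) AS ?c) WHERE { ?s ?p ?o }",
                qs -> n[0] = qs.getLiteral("c").getInt());
        return n[0];
    }

    public static void main(String[] args) {
        // Local in-memory dataset here; a remote SPARQL store would be
        // RDFConnectionFactory.connect("http://host:3030/ds") instead.
        try (RDFConnection conn = RDFConnectionFactory.connect(DatasetFactory.create())) {
            System.out.println(countAfterInsert(conn));
        }
    }
}
```

Because the code touches the store only through the standard protocols, swapping the local dataset for any compliant triple store is a one-line change.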


Re: [3.0.1] Models (not) isomorphic

2018-01-03 Thread Martynas Jusevičius
Andy,

it looks like the new BaseDatatype instance is the culprit. I reproduced a
minimal case - the first test fails, the second one succeeds:

@Test
public void testIdenticalFails()
{
Model a = ModelFactory.createDefaultModel();
a.add(a.createResource(), a.createProperty("http://predicate";),
a.createTypedLiteral("literal", new BaseDatatype("http://type";)));
Model b = ModelFactory.createDefaultModel();
b.add(b.createResource(), b.createProperty("http://predicate";),
b.createTypedLiteral("literal", new BaseDatatype("http://type";)));
assertIsomorphic(a, b);
}

@Test
public void testIdenticalSucceeds()
{
Model a = ModelFactory.createDefaultModel();
a.add(a.createResource(), a.createProperty("http://predicate";),
a.createTypedLiteral("literal", XSDfloat));
Model b = ModelFactory.createDefaultModel();
b.add(b.createResource(), b.createProperty("http://predicate";),
b.createTypedLiteral("literal", XSDfloat));
assertIsomorphic(a, b);
}

Can you confirm?

On Wed, Jan 3, 2018 at 9:52 PM, Andy Seaborne  wrote:

> Can't see anything.
>
> maybe try cutting down the example to the triple(s) that matter?
> have you checked for upgrading from RDF 1.0?
>
> Andy
>
>
> On 03/01/18 17:01, Martynas Jusevičius wrote:
>
>>  System.out.println("-- Expected --");
>>  expected.write(System.out, Lang.TURTLE.getName());
>>  System.out.println("-- Parsed --");
>>  parsed.write(System.out, Lang.TURTLE.getName());
>>
>>  assertIsomorphic(parsed, parsed);
>>  assertIsomorphic(expected, expected);
>> assertIsomorphic(expected, parsed);
>>
>> Two first assertions pass, the third one fails. The output:
>>
>> -- Expected --
>> <http://subject2>  <http://predicate3>  "literal1" .
>>
>> <http://subject3>  <http://predicate4>  "literal2"@da .
>>
>> <http://subject1>  <http://dc.org/#title>
>>  "title"@da ;
>>  <http://predicate1> <http://object1> ;
>>  <http://predicate2> <http://object3> , <http://object2> .
>>
>> <http://subject4>  <http://dct.org/#hasPart>
>>  [ <http://rdf.org/#first>  <http://something/> ;
>><http://rdf.org/#rest>   <http://rdf.org/#nil>
>>  ] ;
>>  <http://predicate5>"literal3"^^<http://type> .
>> -- Parsed --
>> <http://subject2>  <http://predicate3>  "literal1" .
>>
>> <http://subject3>  <http://predicate4>  "literal2"@da .
>>
>> <http://subject1>  <http://dc.org/#title>
>>  "title"@da ;
>>  <http://predicate1> <http://object1> ;
>>  <http://predicate2> <http://object3> , <http://object2> .
>>
>> <http://subject4>  <http://dct.org/#hasPart>
>>  [ <http://rdf.org/#first>  <http://something/> ;
>><http://rdf.org/#rest>   <http://rdf.org/#nil>
>>  ] ;
>>  <http://predicate5>"literal3"^^<http://type> .
>>
>> On Wed, Jan 3, 2018 at 11:22 AM, Andy Seaborne  wrote:
>>
>> Martynas,
>>>
>>> Do you have anything readable?
>>>
>>> If you write-and-read-back each model do that test isomorphic to
>>> themselves?  To each other after read-write?
>>>
>>>  Andy
>>>
>>>
>>> On 03/01/18 01:05, Martynas Jusevičius wrote:
>>>
>>> Hi,
>>>>
>>>> I am writing a unit test for an RDF parser:
>>>> https://github.com/AtomGraph/Core/blob/master/src/test/java/
>>>> com/atomgraph/core/riot/lang/RDFPostReaderTest.java
>>>>
>>>> I have constructed two Models which only differ in blank nodes,
>>>> yet wanted.isIsomorphicWith(got) returns false.
>>>>
>>>> Here are the wanted.toString() and got.toString() output:
>>>>
>>>> http://subject2 @http://predicate3 "literal1";
>>>> http://subject3
>>>> @http://predicate4 "literal2"@da; b1 @http://rdf.org/#rest
>>>> http://rdf.org/#nil; b1 @http://rdf.org/#first http://something/;
>>>> http://subject1 @http

Re: [3.0.1] Models (not) isomorphic

2018-01-04 Thread Martynas Jusevičius
Right... I guess for the purposes of the test I'll just use a built-in
datatype.

Thanks.

On Thu, Jan 4, 2018 at 11:42 AM, Andy Seaborne  wrote:

> Martynas,
>
> Datatypes need to be registered: The easy way (and the way RIOT parsers do
> it) is to use getSafeTypeByName which registers and returns an instance of
> BaseDatatype for unknown datatypes.
>
> RDFDatatype dt =
>   TypeMapper.getInstance().getSafeTypeByName("http://type";);
> ...
> a.createTypedLiteral("literal", dt);
> ...
> b.createTypedLiteral("literal", dt)
>
> Multiple calls to getSafeTypeByName("http://type";) will return the same
> object.
>
> Andy
>
> Node n1 = NodeFactory.createLiteral(
> "literal", new BaseDatatype("http://type";));
> Node n2 = NodeFactory.createLiteral(
> "literal", new BaseDatatype("http://type";));
> System.out.println(n1.equals(n2));
> --> false
>
>
>
> On 04/01/18 00:33, Martynas Jusevičius wrote:
>
>> Andy,
>>
>> it looks like the new BaseDatetype instance is the culprit. I reproduced a
>> minimal case - the first test fails, the second one succeeds:
>>
>>  @Test
>>  public void testIdenticalFails()
>>  {
>>  Model a = ModelFactory.createDefaultModel();
>>  a.add(a.createResource(), a.createProperty("http://predicate";),
>> a.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>  Model b = ModelFactory.createDefaultModel();
>>  b.add(b.createResource(), b.createProperty("http://predicate";),
>> b.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>  assertIsomorphic(a, b);
>>  }
>>
>>  @Test
>>  public void testIdenticalSucceeds()
>>  {
>>  Model a = ModelFactory.createDefaultModel();
>>  a.add(a.createResource(), a.createProperty("http://predicate";),
>> a.createTypedLiteral("literal", XSDfloat));
>>  Model b = ModelFactory.createDefaultModel();
>>  b.add(b.createResource(), b.createProperty("http://predicate";),
>> b.createTypedLiteral("literal", XSDfloat));
>>  assertIsomorphic(a, b);
>>  }
>>
>> Can you confirm?
>>
>> On Wed, Jan 3, 2018 at 9:52 PM, Andy Seaborne  wrote:
>>
>> Can't see anything.
>>>
>>> maybe try cutting down the example to the triple(s) that matter?
>>> have you checked for upgrading from RDF 1.0?
>>>
>>>  Andy
>>>
>>>
>>> On 03/01/18 17:01, Martynas Jusevičius wrote:
>>>
>>>   System.out.println("-- Expected --");
>>>>   expected.write(System.out, Lang.TURTLE.getName());
>>>>   System.out.println("-- Parsed --");
>>>>   parsed.write(System.out, Lang.TURTLE.getName());
>>>>
>>>>   assertIsomorphic(parsed, parsed);
>>>>   assertIsomorphic(expected, expected);
>>>> assertIsomorphic(expected, parsed);
>>>>
>>>> Two first assertions pass, the third one fails. The output:
>>>>
>>>> -- Expected --
>>>> <http://subject2>  <http://predicate3>  "literal1" .
>>>>
>>>> <http://subject3>  <http://predicate4>  "literal2"@da .
>>>>
>>>> <http://subject1>  <http://dc.org/#title>
>>>>   "title"@da ;
>>>>   <http://predicate1> <http://object1> ;
>>>>   <http://predicate2> <http://object3> , <http://object2> .
>>>>
>>>> <http://subject4>  <http://dct.org/#hasPart>
>>>>   [ <http://rdf.org/#first>  <http://something/> ;
>>>> <http://rdf.org/#rest>   <http://rdf.org/#nil>
>>>>   ] ;
>>>>   <http://predicate5>    "literal3"^^<http://type> .
>>>> -- Parsed --
>>>> <http://subject2>  <http://predicate3>  "literal1" .
>>>>
>>>> <http://subject3>  <http://predicate4>  "literal2"@da .
>>>>
>>>> <http://subject1>  <http://dc.org/#title>
>>>>   "title"@da ;
>>>>   <http://predicate1>

Re: [3.0.1] Models (not) isomorphic

2018-01-04 Thread Martynas Jusevičius
Thanks, I will :)

On Thu, Jan 4, 2018 at 2:12 PM, Andy Seaborne  wrote:

> Fix RDFPostReader? L201 and L218.
>
>
> On 04/01/18 11:34, Martynas Jusevičius wrote:
>
>> Right... I guess for the purposes of the test I'll just use a built-in
>> datatype.
>>
>> Thanks.
>>
>> On Thu, Jan 4, 2018 at 11:42 AM, Andy Seaborne  wrote:
>>
>> Martynas,
>>>
>>> Datatypes need to be registered: The easy way (and the way RIOT parsers
>>> do
>>> it) is to use getSafeTypeByName which registers and returns an instance
>>> of
>>> BaseDatatype for unknown datatypes.
>>>
>>> RDFDatatype dt =
>>>TypeMapper.getInstance().getSafeTypeByName("http://type";);
>>> ...
>>> a.createTypedLiteral("literal", dt);
>>> ...
>>> b.createTypedLiteral("literal", dt)
>>>
>>> Multiple calls to getSafeTypeByName("http://type";) will return the same
>>> object.
>>>
>>>  Andy
>>>
>>> Node n1 = NodeFactory.createLiteral(
>>>  "literal", new BaseDatatype("http://type";));
>>> Node n2 = NodeFactory.createLiteral(
>>>  "literal", new BaseDatatype("http://type";));
>>> System.out.println(n1.equals(n2));
>>> --> false
>>>
>>>
>>>
>>> On 04/01/18 00:33, Martynas Jusevičius wrote:
>>>
>>> Andy,
>>>>
>>>> it looks like the new BaseDatetype instance is the culprit. I
>>>> reproduced a
>>>> minimal case - the first test fails, the second one succeeds:
>>>>
>>>>   @Test
>>>>   public void testIdenticalFails()
>>>>   {
>>>>   Model a = ModelFactory.createDefaultModel();
>>>>   a.add(a.createResource(), a.createProperty("http://predicate
>>>> "),
>>>> a.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>>>   Model b = ModelFactory.createDefaultModel();
>>>>   b.add(b.createResource(), b.createProperty("http://predicate
>>>> "),
>>>> b.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>>>   assertIsomorphic(a, b);
>>>>   }
>>>>
>>>>   @Test
>>>>   public void testIdenticalSucceeds()
>>>>   {
>>>>   Model a = ModelFactory.createDefaultModel();
>>>>   a.add(a.createResource(), a.createProperty("http://predicate
>>>> "),
>>>> a.createTypedLiteral("literal", XSDfloat));
>>>>   Model b = ModelFactory.createDefaultModel();
>>>>   b.add(b.createResource(), b.createProperty("http://predicate
>>>> "),
>>>> b.createTypedLiteral("literal", XSDfloat));
>>>>   assertIsomorphic(a, b);
>>>>   }
>>>>
>>>> Can you confirm?
>>>>
>>>> On Wed, Jan 3, 2018 at 9:52 PM, Andy Seaborne  wrote:
>>>>
>>>> Can't see anything.
>>>>
>>>>>
>>>>> maybe try cutting down the example to the triple(s) that matter?
>>>>> have you checked for upgrading from RDF 1.0?
>>>>>
>>>>>   Andy
>>>>>
>>>>>
>>>>> On 03/01/18 17:01, Martynas Jusevičius wrote:
>>>>>
>>>>>System.out.println("-- Expected --");
>>>>>
>>>>>>expected.write(System.out, Lang.TURTLE.getName());
>>>>>>System.out.println("-- Parsed --");
>>>>>>parsed.write(System.out, Lang.TURTLE.getName());
>>>>>>
>>>>>>assertIsomorphic(parsed, parsed);
>>>>>>assertIsomorphic(expected, expected);
>>>>>> assertIsomorphic(expected, parsed);
>>>>>>
>>>>>> Two first assertions pass, the third one fails. The output:
>>>>>>
>>>>>> -- Expected --
>>>>>> <http://subject2>  <http://predicate3>  "literal1" .
>>>>>>
>>>>>> <http://subject3>  <http://predicate4>  "literal2"@da .
>>>>>>
>>>>>> <http://subject1>  <http://dc.o

Re: [3.0.1] Models (not) isomorphic

2018-01-04 Thread Martynas Jusevičius
On second thought, the isomorphism test was for the streaming version
which follows below in the class and uses methods from ReaderRIOTBase. The
datatypes parsing is here:
https://github.com/AtomGraph/Core/blob/master/src/main/java/com/atomgraph/core/riot/lang/RDFPostReader.java#L695

What is the fix here? Is it RDFPostReader handling the datatype wrong or
the ReaderRIOTBase?

On Thu, Jan 4, 2018 at 2:14 PM, Martynas Jusevičius 
wrote:

> Thanks, I will :)
>
> On Thu, Jan 4, 2018 at 2:12 PM, Andy Seaborne  wrote:
>
>> Fix RDFPostReader? L201 an L218.
>>
>>
>> On 04/01/18 11:34, Martynas Jusevičius wrote:
>>
>>> Right... I guess for the purposes of the test I'll just use a built-in
>>> datatype.
>>>
>>> Thanks.
>>>
>>> On Thu, Jan 4, 2018 at 11:42 AM, Andy Seaborne  wrote:
>>>
>>> Martynas,
>>>>
>>>> Datatypes need to be registered: The easy way (and the way RIOT parsers
>>>> do
>>>> it) is to use getSafeTypeByName which registers and returns an instance
>>>> of
>>>> BaseDatatype for unknown datatypes.
>>>>
>>>> RDFDatatype dt =
>>>>TypeMapper.getInstance().getSafeTypeByName("http://type";);
>>>> ...
>>>> a.createTypedLiteral("literal", dt);
>>>> ...
>>>> b.createTypedLiteral("literal", dt)
>>>>
>>>> Multiple calls to getSafeTypeByName("http://type";) will return the same
>>>> object.
>>>>
>>>>  Andy
>>>>
>>>> Node n1 = NodeFactory.createLiteral(
>>>>  "literal", new BaseDatatype("http://type";));
>>>> Node n2 = NodeFactory.createLiteral(
>>>>  "literal", new BaseDatatype("http://type";));
>>>> System.out.println(n1.equals(n2));
>>>> --> false
>>>>
>>>>
>>>>
>>>> On 04/01/18 00:33, Martynas Jusevičius wrote:
>>>>
>>>> Andy,
>>>>>
>>>>> it looks like the new BaseDatetype instance is the culprit. I
>>>>> reproduced a
>>>>> minimal case - the first test fails, the second one succeeds:
>>>>>
>>>>>   @Test
>>>>>   public void testIdenticalFails()
>>>>>   {
>>>>>   Model a = ModelFactory.createDefaultModel();
>>>>>   a.add(a.createResource(), a.createProperty("http://predicate
>>>>> "),
>>>>> a.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>>>>   Model b = ModelFactory.createDefaultModel();
>>>>>   b.add(b.createResource(), b.createProperty("http://predicate
>>>>> "),
>>>>> b.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>>>>   assertIsomorphic(a, b);
>>>>>   }
>>>>>
>>>>>   @Test
>>>>>   public void testIdenticalSucceeds()
>>>>>   {
>>>>>   Model a = ModelFactory.createDefaultModel();
>>>>>   a.add(a.createResource(), a.createProperty("http://predicate
>>>>> "),
>>>>> a.createTypedLiteral("literal", XSDfloat));
>>>>>   Model b = ModelFactory.createDefaultModel();
>>>>>   b.add(b.createResource(), b.createProperty("http://predicate
>>>>> "),
>>>>> b.createTypedLiteral("literal", XSDfloat));
>>>>>   assertIsomorphic(a, b);
>>>>>   }
>>>>>
>>>>> Can you confirm?
>>>>>
>>>>> On Wed, Jan 3, 2018 at 9:52 PM, Andy Seaborne  wrote:
>>>>>
>>>>> Can't see anything.
>>>>>
>>>>>>
>>>>>> maybe try cutting down the example to the triple(s) that matter?
>>>>>> have you checked for upgrading from RDF 1.0?
>>>>>>
>>>>>>   Andy
>>>>>>
>>>>>>
>>>>>> On 03/01/18 17:01, Martynas Jusevičius wrote:
>>>>>>
>>>>>>System.out.println("-- Expected --");
>>>>>>
>>>>>>>expected.write(System.out, Lang.TURTLE.getName());
>>>>>>>Syst

Re: [3.0.1] Models (not) isomorphic

2018-01-04 Thread Martynas Jusevičius
Sorry, my bad - the test actually creates two BaseDatatype instances. But if
I change it in the following way and it passes, I guess the parser is OK?

RDFDatatype datatype = new BaseDatatype("http://type";);
Model a = ModelFactory.createDefaultModel();
a.add(a.createResource(), a.createProperty("http://predicate";),
a.createTypedLiteral("literal", datatype));
Model b = ModelFactory.createDefaultModel();
b.add(b.createResource(), b.createProperty("http://predicate";),
b.createTypedLiteral("literal", datatype));
assertIsomorphic(a, b);

On Thu, Jan 4, 2018 at 2:53 PM, Andy Seaborne  wrote:

> Then trace through and see where the datatype is created.
>
> Works fine on current Jena but then various things have been improved
> since 3.0.1.
>
> Andy
>
>
> On 04/01/18 13:19, Martynas Jusevičius wrote:
>
>> On a second thought, the isomorphism test was for the streaming version
>> which follows bellow in the class and uses methods from ReaderRIOTBase.
>> The
>> datatypes parsing is here:
>> https://github.com/AtomGraph/Core/blob/master/src/main/java/
>> com/atomgraph/core/riot/lang/RDFPostReader.java#L695
>>
>> What is the fix here? Is it RDFPostReader handling the datatype wrong or
>> the ReaderRIOTBase?
>>
>> On Thu, Jan 4, 2018 at 2:14 PM, Martynas Jusevičius <
>> marty...@atomgraph.com>
>> wrote:
>>
>> Thanks, I will :)
>>>
>>> On Thu, Jan 4, 2018 at 2:12 PM, Andy Seaborne  wrote:
>>>
>>> Fix RDFPostReader? L201 an L218.
>>>>
>>>>
>>>> On 04/01/18 11:34, Martynas Jusevičius wrote:
>>>>
>>>> Right... I guess for the purposes of the test I'll just use a built-in
>>>>> datatype.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> On Thu, Jan 4, 2018 at 11:42 AM, Andy Seaborne 
>>>>> wrote:
>>>>>
>>>>> Martynas,
>>>>>
>>>>>>
>>>>>> Datatypes need to be registered: The easy way (and the way RIOT
>>>>>> parsers
>>>>>> do
>>>>>> it) is to use getSafeTypeByName which registers and returns an
>>>>>> instance
>>>>>> of
>>>>>> BaseDatatype for unknown datatypes.
>>>>>>
>>>>>> RDFDatatype dt =
>>>>>> TypeMapper.getInstance().getSafeTypeByName("http://type";);
>>>>>> ...
>>>>>> a.createTypedLiteral("literal", dt);
>>>>>> ...
>>>>>> b.createTypedLiteral("literal", dt)
>>>>>>
>>>>>> Multiple calls to getSafeTypeByName("http://type";) will return the
>>>>>> same
>>>>>> object.
>>>>>>
>>>>>>   Andy
>>>>>>
>>>>>> Node n1 = NodeFactory.createLiteral(
>>>>>>   "literal", new BaseDatatype("http://type";));
>>>>>> Node n2 = NodeFactory.createLiteral(
>>>>>>   "literal", new BaseDatatype("http://type";));
>>>>>> System.out.println(n1.equals(n2));
>>>>>> --> false
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 04/01/18 00:33, Martynas Jusevičius wrote:
>>>>>>
>>>>>> Andy,
>>>>>>
>>>>>>>
>>>>>>> it looks like the new BaseDatetype instance is the culprit. I
>>>>>>> reproduced a
>>>>>>> minimal case - the first test fails, the second one succeeds:
>>>>>>>
>>>>>>>@Test
>>>>>>>public void testIdenticalFails()
>>>>>>>{
>>>>>>>Model a = ModelFactory.createDefaultModel();
>>>>>>>a.add(a.createResource(), a.createProperty("http://predi
>>>>>>> cate
>>>>>>> "),
>>>>>>> a.createTypedLiteral("literal", new BaseDatatype("http://type";)));
>>>>>>>Model b = ModelFactory.createDefaultModel();
>>>>>>>b.add(b.createResource(), b.createProperty("http://predi
>>>>>>> cate
>>>>>>> "),
>>>>>>> b.createTypedLite

Re: [3.0.1] Models (not) isomorphic

2018-01-04 Thread Martynas Jusevičius
Sorry once again, I was getting confused.

I changed the original test method to construct Model with

RDFDatatype datatype = TypeMapper.getInstance().getSafeTypeByName("
http://type";);

and now it passes.

Thanks!

On Thu, Jan 4, 2018 at 5:46 PM, Martynas Jusevičius 
wrote:

> Sorry, my bad - the test actually creates 2 BaseDatatype instances. But if
> I change it the following way, and it passes, I guess the parser is OK?
>
> RDFDatatype datatype = new BaseDatatype("http://type";);
> Model a = ModelFactory.createDefaultModel();
> a.add(a.createResource(), a.createProperty("http://predicate";),
> a.createTypedLiteral("literal", datatype));
> Model b = ModelFactory.createDefaultModel();
> b.add(b.createResource(), b.createProperty("http://predicate";),
> b.createTypedLiteral("literal", datatype));
> assertIsomorphic(a, b);
>
> On Thu, Jan 4, 2018 at 2:53 PM, Andy Seaborne  wrote:
>
>> Then trace through and see where the datatype is created.
>>
>> Works fine on current Jena but then various things have been improved
>> since 3.0.1.
>>
>> Andy
>>
>>
>> On 04/01/18 13:19, Martynas Jusevičius wrote:
>>
>>> On a second thought, the isomorphism test was for the streaming version
>>> which follows bellow in the class and uses methods from ReaderRIOTBase.
>>> The
>>> datatypes parsing is here:
>>> https://github.com/AtomGraph/Core/blob/master/src/main/java/
>>> com/atomgraph/core/riot/lang/RDFPostReader.java#L695
>>>
>>> What is the fix here? Is it RDFPostReader handling the datatype wrong or
>>> the ReaderRIOTBase?
>>>
>>> On Thu, Jan 4, 2018 at 2:14 PM, Martynas Jusevičius <
>>> marty...@atomgraph.com>
>>> wrote:
>>>
>>> Thanks, I will :)
>>>>
>>>> On Thu, Jan 4, 2018 at 2:12 PM, Andy Seaborne  wrote:
>>>>
>>>> Fix RDFPostReader? L201 an L218.
>>>>>
>>>>>
>>>>> On 04/01/18 11:34, Martynas Jusevičius wrote:
>>>>>
>>>>> Right... I guess for the purposes of the test I'll just use a built-in
>>>>>> datatype.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> On Thu, Jan 4, 2018 at 11:42 AM, Andy Seaborne 
>>>>>> wrote:
>>>>>>
>>>>>> Martynas,
>>>>>>
>>>>>>>
>>>>>>> Datatypes need to be registered: The easy way (and the way RIOT
>>>>>>> parsers
>>>>>>> do
>>>>>>> it) is to use getSafeTypeByName which registers and returns an
>>>>>>> instance
>>>>>>> of
>>>>>>> BaseDatatype for unknown datatypes.
>>>>>>>
>>>>>>> RDFDatatype dt =
>>>>>>> TypeMapper.getInstance().getSafeTypeByName("http://type";);
>>>>>>> ...
>>>>>>> a.createTypedLiteral("literal", dt);
>>>>>>> ...
>>>>>>> b.createTypedLiteral("literal", dt)
>>>>>>>
>>>>>>> Multiple calls to getSafeTypeByName("http://type";) will return the
>>>>>>> same
>>>>>>> object.
>>>>>>>
>>>>>>>   Andy
>>>>>>>
>>>>>>> Node n1 = NodeFactory.createLiteral(
>>>>>>>   "literal", new BaseDatatype("http://type";));
>>>>>>> Node n2 = NodeFactory.createLiteral(
>>>>>>>   "literal", new BaseDatatype("http://type";));
>>>>>>> System.out.println(n1.equals(n2));
>>>>>>> --> false
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 04/01/18 00:33, Martynas Jusevičius wrote:
>>>>>>>
>>>>>>> Andy,
>>>>>>>
>>>>>>>>
>>>>>>>> it looks like the new BaseDatetype instance is the culprit. I
>>>>>>>> reproduced a
>>>>>>>> minimal case - the first test fails, the second one succeeds:
>>>>>>>>
>>>>>>>>@Test
>>>>>>>>public void testIdenticalFails()
>>

Re: RDFa ...

2018-01-13 Thread Martynas Jusevičius
Does it support RDFa 1.1?

Can it read both XHTML and HTML(5)?

On Sat, Jan 13, 2018 at 8:47 PM, Jean-Marc Vanel 
wrote:

> Yes,
> it's already possible if you use my fork from source.
> It will be easier when there will be a new release of Java-RDFa in Maven
> Central Repository.
> The current release of Java-RDFa (old) in Maven is not compatible with
> current Jena API.
> And even easier if Java-RDFa is integrated in RDFDataMgr .
>
>
> 2018-01-13 19:56 GMT+01:00 Laura Morales :
>
> > What does this mean for Jena/Fuseki? That it's possible to download an
> > HTML page and add RDF information to a graph?
> >
> >
> >
> >
> > Sent: Saturday, January 13, 2018 at 6:42 PM
> > From: "Jean-Marc Vanel" 
> > To: "Jena users" 
> > Subject: RDFa ...
> > Hi
> >
> > Good news!
> > I started upgrading the project java-rdfa .
> > Here is my fork:
> > https://github.com/jmvanel/java-rdfa/commits?author=jmvanel
> >
> > --
> > Jean-Marc Vanel
> > http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me#subject
> > Déductions SARL - Consulting, services, training,
> > Rule-based programming, Semantic Web
> > +33 (0)6 89 16 29 52
> > Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
> >
>
>
>
> --
> Jean-Marc Vanel
> http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me#subject
> Déductions SARL - Consulting, services, training,
> Rule-based programming, Semantic Web
> +33 (0)6 89 16 29 52
> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
>


Re: RDFa ...

2018-01-15 Thread Martynas Jusevičius
I think in principle a GRDDL XSLT stylesheet should be enough to transform
(X)HTML+RDFa to RDF/XML. Something like this:
http://ns.inria.fr/grddl/rdfa/

This would be more reusable across platforms than Jena-based code.
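
The mechanics need nothing beyond JAXP. Below is a toy sketch: the embedded stylesheet is a stand-in I wrote for illustration (it only lifts `@about` attributes into empty `rdf:Description` elements), whereas a real GRDDL transform such as the INRIA one covers the whole RDFa attribute model (`@property`, `@rel`, `@typeof`, and so on).

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class GrddlSketch {
    // Minimal stand-in stylesheet: one rdf:Description per element with @about.
    static final String XSL =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "  <xsl:template match='/'>"
      + "    <rdf:RDF xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'>"
      + "      <xsl:for-each select='//*[@about]'>"
      + "        <rdf:Description rdf:about='{@about}'/>"
      + "      </xsl:for-each>"
      + "    </rdf:RDF>"
      + "  </xsl:template>"
      + "</xsl:stylesheet>";

    // Applies the stylesheet to an (X)HTML string and returns the RDF/XML result.
    static String transform(String xhtml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xhtml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String page = "<html><body>"
                + "<p about='http://example.org/x' property='dc:title'>T</p>"
                + "</body></html>";
        System.out.println(transform(page));
    }
}
```

The caveat with the pure-XSLT route is that it needs well-formed XML input, so tag-soup HTML5 would first have to go through an HTML parser such as a tidy step.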

On Mon, Jan 15, 2018 at 12:56 PM, Jean-Marc Vanel 
wrote:

> Here is a summary of the problems with RDFa 1.0 and 1.1 in official test
> suite in
> https://github.com/rdfa/rdfa.github.io/blob/master/test-suite/test-cases/
> and
> http://rdfa.info/test-suite/test-cases/rdfa1.0/xml/0001.xml
> (the latter does not display directory content :( )
>
> https://github.com/jmvanel/java-rdfa/blob/master/TODO.md#summary
>
> Summary of the summary:
>
>- 5 problems common to RDFa 1.0 and 1.1
>- 6 unique problems in RDFa 1.1 , including a big case
>
>
> Only problems with XML wrapping are analyzed here; XHTML (1 or 5), HTML (4 or
> 5), and SVG wrapping were not analyzed; they are likely to be much the
> same.
>
> My opinion is that with the recent upgrades, *java-rdfa is good enough for
> a release*.
> The current version 4.2 is unusable, because it relies on old Jena API with
> hp.hpl prefix in classes.
> The failing tests involve subtle features of RDFa, a complex spec; the
> vast majority of tests pass.
>
> A release will allow RDFa to be actually used, and then the users will be
> able to tell their priorities regarding the failing tests.
> On my side, I will try it in Semantic_forms, which might reveal runtime
> problems.
>
> I add in CC the maintainer of *java-rdfa* (shellac does not seem to have
> an email available).
>
>
> 2018-01-15 0:14 GMT+01:00 Jean-Marc Vanel :
>
> > I paste latest commit:
> >
> > Add all tests in http://rdfa.info/test-suite/test-cases
> >
> > - check results by RDF graph comparison
> > - add 10 new test classes
> > - old test classes should be FIXED or removed
> > - results are green in majority, but lot of red ! :(
> >
> >
> > 2018-01-14 11:58 GMT+01:00 Jean-Marc Vanel :
> >
> >>
> >>
> >> 2018-01-14 0:10 GMT+01:00 Martynas Jusevičius :
> >>
> >>> Does it support RDFa 1.1?
> >>>
> >>
> >> Couldn't get it to return a triple from the RDFa 1.1 tests.
> >> Tried with class rdfa.simpleparse,
> >> and URL's :
> >> http://rawgit.com/rdfa/rdfa.github.io/master/test-suite/test
> >> -cases/rdfa1.1/html4/0001.html
> >> http://rawgit.com/rdfa/rdfa.github.io/master/test-suite/test
> >> -cases/rdfa1.1/xhtml5/0001.xhtml
> >> https://raw.githubusercontent.com/rdfa/rdfa.github.io/master
> >> /test-suite/test-cases/rdfa1.1/xhtml5/0334.xhtml
> >>
> >> with and without
> >> --format XHTML
> >> or
> >> --format HTML
> >>
> >> The tests are in a github.io project:
> >> https://github.com/rdfa/rdfa.github.io/tree/master/test-
> suite/test-cases
> >>
> >> I prefer to use the rawgit.com service, that allows to test both HTTP
> >> and HTTPS. Note that HTTPS is not a problem for Java-RDFa .
> >>
> >> The tests changed 7 months ago; there are 334 tests each for most
> >> combinations of (1.0, 1.1) and (HTML, XHTML, HTML5).
> >> Currently only 19 tests pass:
> >>
> >> Tests run: 28, Failures: 3, Errors: 6, Skipped: 0
> >>
> >> I have the impression that developments were stopped in 2016 in the
> >> middle of implementing RDFa 1.1 .
> >> If Maven experts could review the pom.xml that would help !
> >>
> >>
> >>> Can it read both XHTML and HTML(5)?
> >>>
> >>
> >> Yes , with RDFa 1.0.
> >> Tried URL's :
> >> https://rawgit.com/rdfa/rdfa.github.io/master/test-suite/tes
> >> t-cases/rdfa1.0/xhtml1/0001.xhtml
> >> https://rawgit.com/rdfa/rdfa.github.io/master/test-suite/tes
> >> t-cases/rdfa1.0/html4/0001.html
> >> which both return:
> >> <https://rawgit.com/rdfa/rdfa.github.io/master/test-suite/te
> >> st-cases/rdfa1.0/xhtml1/photo1.jpg>
> >> <http://purl.org/dc/elements/1.1/creator>
> >> "Mark Birbeck" .
> >>
> >>
> >>
> >>> On Sat, Jan 13, 2018 at 8:47 PM, Jean-Marc Vanel <
> >>> jeanmarc.va...@gmail.com>
> >>> wrote:
> >>>
> >>> > Yes,
> >>> > it's already possible if you use my fork from source.
> >>> > It will be easier when there will be a new release of Java-RDFa in
> >>> Maven
> >&g
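The GRDDL approach suggested at the top of this thread can be exercised with nothing but the JDK's built-in XSLT engine (javax.xml.transform). A minimal, self-contained sketch — the inline stylesheet here is a trivial stand-in for illustration, not the actual RDFa stylesheet hosted at ns.inria.fr:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class GrddlSketch {
    // Trivial stand-in stylesheet; a real GRDDL run would load the RDFa
    // stylesheet referenced in the thread (http://ns.inria.fr/grddl/rdfa/).
    static final String XSLT =
        "<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
      + "<xsl:output method=\"xml\"/>"
      + "<xsl:template match=\"/\"><out><xsl:value-of select=\"//title\"/></out></xsl:template>"
      + "</xsl:stylesheet>";

    /** Apply the stylesheet to an (X)HTML string and return the result. */
    public static String transform(String xml) {
        try {
            Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(XSLT)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same few lines work with any GRDDL stylesheet that produces RDF/XML, which is what makes the approach portable across platforms.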

Re: GenericRuleReasoner with limited scope

2018-01-16 Thread Martynas Jusevičius
There is also something like this:
https://www.researchgate.net/publication/307934931_SPARQL_Commands_in_Jena_Rules

On Tue, Jan 16, 2018 at 12:21 PM, Lorenz Buehmann <
buehm...@informatik.uni-leipzig.de> wrote:

> > I would like the GenericRuleReasoner to take the SPARQL query into
> account when reasoning. So, only reason about those facts that 'seem'
> relevant for answering the SPARQL query.
> Wouldn't the logical way here to execute the SPARQL query (it has to be
> a CONSTRUCT query indeed) and use the model returned here as the input
> of the reasoner?
>
>
> On 16.01.2018 10:55, Nouwt, B. (Barry) wrote:
> > Hi everyone,
> >
> > Currently I'm using the GenericRuleReasoner of Apache Jena in my project
> to apply custom rules to my RDF data. This works as expected: as soon as I
> execute the first SPARQL query in Apache Jena Fuseki, the
> GenericRuleReasoner correctly determines all the derived triples and the
> query can be answered.
> >
> > Now I am looking for the following GenericRuleReasoner (or another
> reasoner) behavior:
> >
> >   1.  I would like the GenericRuleReasoner to take the SPARQL query into
> account when reasoning. So, only reason about those facts that 'seem'
> relevant for answering the SPARQL query.
> >   2.  I would like the GenericRuleReasoner to remove the derived triples
> after the SPARQL query has been answered. So, the next time it receives the
> same SPARQL query, it will not be able to reuse the previous reasoning
> result.
> >
> > Note:
> >
> >   *   I am willing to accept some incomplete answers to get the
> functionality described in 1.
> >   *   In literature I found the term 'query answering' as one of the
> things a reasoner can do, which looks like the behavior I describe in point
> 1 above. It is also mentioned here: http://ontogenesis.
> knowledgeblog.org/1486
> >
> > Does anyone know whether the behavior described above is possible (does
> Apache Jena allow the reasoner to use the SPARQL query as input at all) and
> whether there are rule engine implementations for Jena available that
> implement such behavior? Is it possible to configure Jena in such a way
> that it restarts the reasoner for every query it receives?
> >
> > Any other pointers that help me understand this reasoner related subject
> are welcome as well.
> >
> > Regards,
> >
> > Barry
> > This message may contain information that is not intended for you. If
> you are not the addressee or if this message was sent to you by mistake,
> you are requested to inform the sender and delete the message. TNO accepts
> no liability for the content of this e-mail, for the manner in which you
> use it and for damage of any kind resulting from the risks inherent to the
> electronic transmission of messages.
> >
>
>


Re: validate an RDF/XML file with bad URL's

2018-01-20 Thread Martynas Jusevičius
I agree that fixing at the source is way to go.

Checking (but not fixing) a URI in XSLT 2.0 could be as simple as
@rdf:about castable as xs:anyURI
On Sat, 20 Jan 2018 at 10.28, Conal Tuohy  wrote:

> On 20 January 2018 at 18:37, Jean-Marc Vanel 
> wrote:
>
> > 2018-01-20 0:15 GMT+01:00 Andy Seaborne :
> >
> > > Hi,
> > >
> > > Minimal, example file?
> > >
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> > http://xmlns.com/foaf/0.1/"; >
> >  > rdf:about="
> > https://www.communecter.org/#organization.detail.id.
> > 5898612440bb4e7d28cfc81a"
> > >
> >   http://anais-ponta...@googlegroups.com
> > *"/>
> >   
> > 
> >
>
>
> >
> > > Passing the input through a text processing stage (perl, sed ...) is
> > > probably the better way - fix up the errors.
> > >
> >
> > Sure, but I'm at the end of data flow: a crowd sourcing site gathers
> > (variable) quality data, then a developer converts several such sites in
> a
> > unique XML format, then me applying XSLT for RDF. So upstream it's
> curated,
> > and I report everything I find . And bad IRI's do not prevent the RDF to
> be
> > loaded in TDB .
> >
> >
> If you are generating the RDF/XML using XSLT, may I suggest you try
> to clean up the URIs in the XSLT? If you are using XSLT version 2 or newer,
> then you can even use xsl:analyze-string to check URIs with a regex, but
> even in XSLT 1 it should not be hard. Then you can repair (or log) errors
> like the one in your example, as well as ensuring that host names are in
> lower case, characters are correctly URI-encoded, etc.
>
> For example, here's an XSLT template I've used to repair incorrect
> URI-encoding in some URIs prior to ingestion as RDF:
>
> [XSLT template stripped by the mailing-list archiver]
>
> I hope that's helpful!
>
> --
> Conal Tuohy
> http://conaltuohy.com/
> @conal_tuohy
> +61-466-324297
>
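The same pre-load sanity check can also be done on the Java side before handing data to TDB. A rough stdlib-only sketch, assuming java.net.URI as the validator — note this is not Jena's own IRI machinery, and RFC 3986 parsing is stricter in places than what a store will accept:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class IriCheck {
    /**
     * Rough pre-load check: the string must parse per RFC 3986
     * (java.net.URI) and be absolute. Log or repair anything that fails.
     */
    public static boolean looksValid(String iri) {
        try {
            return new URI(iri).isAbsolute();
        } catch (URISyntaxException e) {
            return false;
        }
    }
}
```

Running every rdf:about / rdf:resource value through such a filter upstream catches the kind of malformed mbox URL shown earlier in the thread before it ever reaches the store.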


Re: Inconsistency in Ontologies

2018-01-22 Thread Martynas Jusevičius
You should probably ask on
https://sourceforge.net/p/dbpedia/mailman/dbpedia-ontology/

On Mon, Jan 22, 2018 at 4:16 PM, javed khan  wrote:

> Hello
>
> What are the possible examples (chances) of inconsistencies in ontologies?
> Like one I know is if we take same instance of classes that are disjoint.
>
> Also what (chances of) inconsistencies arises when data is extracted from
> Wikipedia infoboxes via Dbpedia?
>
> Thank you
>
>


Re: Extending Sparql with New Keywords

2018-02-05 Thread Martynas Jusevičius
Berkin,

unless this is some kind of exercise, is this really a good idea? What
would be the adoption rate of your new language? All SPARQL 1.1 processors
will report errors in your extended syntax.

On Mon, Feb 5, 2018 at 9:42 PM, Berkin Özdemir Bengisu <
berkinozdemirbeng...@gmail.com> wrote:

> Hello,
>
> I am using jena 3.6.0 and I would like to extend the sparql 1.1 grammar
> with some new keywords. Could you please help me what would be the best way
> to start with?
>
> - As I understood, the grammar is generated by javaCC but I was unable to
> find any documentation or any main classes that actually generate the
> grammar. Are grammar classes generated as an out source and then added to
> the project?
>
> I would be really glad if you could guide me on this.
>
> Best
> Berkin
>


Re: Updating triples that originate from a specific named file.

2018-02-09 Thread Martynas Jusevičius
I think what you want is to import each RDF document into a named graph and
then do Graph Store Protocol PUT with the graph URI:
https://www.w3.org/TR/sparql11-http-rdf-update/#http-put

On Fri, Feb 9, 2018 at 8:31 PM, Jeffrey C. Witt 
wrote:

> I have a question about updating triples in TDB via Fuseki.
>
> I have a work flow where I automatically build triples through an
> aggregator.
>
> The results of this aggregation get stored in various named static files.
>
> I then use the "./s-post" command to add each of these files to TDB.
>
> The static files get changed at different times depending on the rate of
> updates further on upstream, and thus I'm wondering if there is a way to
> update TDB by replacing only the triples that were ingested from a
> particular file with the triples from the updated file.
>
> Perhaps an example will help.
>
> Suppose I have files A.rdf, B.rdf, and C.rdf.
>
> On the initial build, I load all three files into TDB as follows:
>
> ./bin/s-post http://localhost:3030/ds/data default A.rdf
> ./bin/s-post http://localhost:3030/ds/data default B.rdf
> ./bin/s-post http://localhost:3030/ds/data default C.rdf
>
> Now suppose that C.rdf changes. But A.rdf and B.rdf remain the same.
>
> At the present, I'm creating an entirely new build by reloading all three
> files in the same manner.
>
> But as the number of files increases, each re-build takes a while.
>
> I would prefer to be able to remove only the triples that were originally
> added to the database from file C.rdf with the triples from the new,
> updated, C.rdf file, leaving the triples that originated from A.rdf and
> B.rdf untouched.
>
> using ./s-post on the updated C.rdf file just adds the new triples
> alongside the old ones
> and
> using ./s-update replaces the entire graph (including the triples from
> files A and B) with the triples from the updated C.rdf file.
>
> I'm not sure if this is possible, but I wanted to check before I continued
> my practice or consider other solutions.
>
> Many thanks for any assistance you can offer.
> Jeff
>
> --
> Dr. Jeffrey C. Witt
> Philosophy Department
> Loyola University Maryland
> 4501 N. Charles St.
> Baltimore, MD 21210
> www.jeffreycwitt.com
>
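The per-file named-graph PUT suggested above boils down to the SPARQL 1.1 Graph Store Protocol's `?graph=` indirection: each file gets its own graph URI, and replacing that graph replaces exactly the triples that came from that file. A small sketch of building the request URL — the endpoint path is the one from the thread; issuing the actual HTTP PUT with the file body and an RDF Content-Type is what s-put does under the hood:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class GspUrl {
    /**
     * Build the Graph Store Protocol URL for a named graph
     * (SPARQL 1.1 GSP "indirect graph identification", ?graph=<encoded-uri>).
     */
    public static String putUrl(String endpoint, String graphUri) {
        try {
            return endpoint + "?graph=" + URLEncoder.encode(graphUri, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e); // UTF-8 is always available
        }
    }
}
```

With one graph per source file, an updated C.rdf is re-ingested with a single PUT against its graph URL, leaving the A.rdf and B.rdf graphs untouched.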


Re: Updating triples that originate from a specific named file.

2018-02-09 Thread Martynas Jusevičius
Which is s-put in Jena:
https://jena.apache.org/documentation/fuseki2/soh.html#soh-sparql-http

On Fri, Feb 9, 2018 at 10:36 PM, Martynas Jusevičius  wrote:

> I think what you want is to import each RDF document into a named graph
> and then do Graph Store Protocol PUT with the graph URI:
> https://www.w3.org/TR/sparql11-http-rdf-update/#http-put
>
> On Fri, Feb 9, 2018 at 8:31 PM, Jeffrey C. Witt 
> wrote:
>
>> I have a question about updating triples in TDB via Fuseki.
>>
>> I have a work flow where I automatically build triples through an
>> aggregator.
>>
>> The results of this aggregation get stored in various named static files.
>>
>> I then use the "./s-post" command to add each of these files to TDB.
>>
>> The static files get changed at different times depending on the rate of
>> updates further on upstream, and thus I'm wondering if there is a way to
>> update TDB by replacing only the triples that were ingested from a
>> particular file with the triples from the updated file.
>>
>> Perhaps an example will help.
>>
>> Suppose I have files A.rdf, B.rdf, and C.rdf.
>>
>> On the initial build, I load all three files into TDB as follows:
>>
>> ./bin/s-post http://localhost:3030/ds/data default A.rdf
>> ./bin/s-post http://localhost:3030/ds/data default B.rdf
>> ./bin/s-post http://localhost:3030/ds/data default C.rdf
>>
>> Now suppose that C.rdf changes. But A.rdf and B.rdf remain the same.
>>
>> At the present, I'm creating an entirely new build by reloading all three
>> files in the same manner.
>>
>> But as the number of files increases, each re-build takes a while.
>>
>> I would prefer to be able to remove only the triples that were originally
>> added to the database from file C.rdf with the triples from the new,
>> updated, C.rdf file, leaving the triples that originated from A.rdf and
>> B.rdf untouched.
>>
>> using ./s-post on the updated C.rdf file just adds the new triples
>> alongside the old ones
>> and
>> using ./s-update replaces the entire graph (including the triples from
>> files A and B) with the triples from the updated C.rdf file.
>>
>> I'm not sure if this is possible, but I wanted to check before I continued
>> my practice or consider other solutions.
>>
>> Many thanks for any assistance you can offer.
>> Jeff
>>
>> --
>> Dr. Jeffrey C. Witt
>> Philosophy Department
>> Loyola University Maryland
>> 4501 N. Charles St.
>> Baltimore, MD 21210
>> www.jeffreycwitt.com
>>
>
>


Re: Vocabulary for provenance

2018-02-22 Thread Martynas Jusevičius
Since you mention "provenance", PROV ontology would be one option:
https://www.w3.org/TR/prov-o/

But your usage looks more like VoID, more specifically void:inDataset:
https://www.w3.org/TR/void/#backlinks

PROV and VoID can be combined of course.

On Thu, Feb 22, 2018 at 11:35 AM, Laura Morales  wrote:

> Which vocabulary is a good choice to describe provenance or to describe
> graphs? I'd like to use something like this
>
>   "Graph-1"
>   
>
> or like this
>
>   [
>  "Graph-1" ;
>   
> ]
>
> there are so many vocabularies out there that I don't even know where to
> start looking at. Is there anything available for this in schema.org?
> Otherwise I'd like to know if there exist any vocabulary that is used by
> others, and more importantly which is not abandoned or dead.
>
> Thanks.
>
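A sketch of what the second fragment above could look like with void:inDataset spelled out — all URIs here are placeholders, for illustration only:

```turtle
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# The dataset/graph being described (hypothetical URI).
<http://example.org/dataset/graph-1>
    a void:Dataset ;
    dcterms:title "Graph-1" .

# A resource pointing back at the dataset it belongs to.
<http://example.org/resource/1>
    void:inDataset <http://example.org/dataset/graph-1> .
```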


ResourceImpl subclass constructors with extra arguments

2018-02-25 Thread Martynas Jusevičius
Hi,

we are extensively using Jena's polymorphism, by implementing classes that
extend ResourceImpl, with such constructors:

public SomeImpl(Node n, EnhGraph g)
{
super(n, g);
}

This works nicely for polymorphic purposes, but it also means that we
cannot pass extra arguments that we would normally want in a constructor,
for example:

public SomeImpl(Node n, EnhGraph g, Client client)
{
super(n, g);
this.client = client;
}

We could of course add arguments to the constructor, but then in
Implementation.wrap() we wouldn't know where to get their values from.

To work around this, we've implemented setters such as
SomeImpl.setClient(Client)
instead. But it's not optimal, because this is the only place where we need
setters in an otherwise completely immutable codebase.

I hope I managed to explain my use case. Is there a way to achieve this?
Examples would be great.
Maybe I need to subclass Implementation to provide values to the
constructors?


Martynas
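One way to avoid the setter, sketched below with stdlib-only stand-in types (the names are illustrative, not Jena API): let the factory that plays the role of Implementation.wrap() own the extra dependency and hand it to the constructor, so the wrapped objects stay immutable.

```java
public class FactorySketch {
    /** Stand-in for the extra dependency (e.g. a Jersey Client). */
    public static class Client {
        public final String name;
        public Client(String name) { this.name = name; }
    }

    /** Stand-in for SomeImpl(Node n, EnhGraph g, Client client). */
    public static class SomeImpl {
        public final String node;   // stands in for (Node, EnhGraph)
        public final Client client; // the extra, immutable dependency
        public SomeImpl(String node, Client client) {
            this.node = node;
            this.client = client;
        }
    }

    /**
     * Plays the role of Implementation.wrap(): constructed once with the
     * Client, so wrap() can pass it to every new instance.
     */
    public static class SomeFactory {
        private final Client client;
        public SomeFactory(Client client) { this.client = client; }
        public SomeImpl wrap(String node) { return new SomeImpl(node, client); }
    }
}
```

The trade-off is that the factory (rather than the wrapped class) must be registered wherever instances get created, but no mutable state leaks into the instances themselves.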


Re: Use PREFIXes by default

2018-02-27 Thread Martynas Jusevičius
I don't think sending results without prefixes is possible, as it would
make them invalid with respect to their format. Sounds like a really bad
idea TBH.

On Tue, Feb 27, 2018 at 12:48 PM, Laura Morales  wrote:

> Do these libraries also add PREFIXes for the output? For example I send a
> query, get a XML or JSON-LD back, and the library automatically applies
> known prefixes to the JSON-LD?
>
>
>
>
> Sent: Tuesday, February 27, 2018 at 12:08 PM
> From: "Osma Suominen" 
> To: users@jena.apache.org
> Subject: Re: Use PREFIXes by default
> ajs6f kirjoitti 25.02.2018 klo 23:46:
> > If you are not concerned about performance, why not add those prefixes
> client-side?
>
> Some client-side libraries do this.
>
> With EasyRdf (for PHP), you declare your prefixes once, and the library
> knows about the common ones so you don't have to declare RDFS, OWL etc.
> Then you write your SPARQL queries using qnames and EasyRdf prepends
> PREFIX declarations transparently at query time (and only the ones you
> actually used in the particular query).
>
> YASGUI is a bit similar, when you use an unknown prefix it will try to
> look it up from prefix.cc and add the PREFIX declaration if it finds one.
>
> It's too bad SPARQLWrapper (for Python) doesn't do this, there you have
> to declare PREFIXes for each query. Of course it's fairly trivial to
> make a wrapper that takes care of adding prefixes to a query.
>
> Like others in this thread have argued, I think it makes sense to do
> prefix handling on the client side and keep the SPARQL protocol "simple
> but stupid". Then everything needed to answer a query (well, except for
> the RDF data set of course) will be contained in the HTTP request.
>
> -Osma
>
>
> --
> Osma Suominen
> D.Sc. (Tech), Information Systems Specialist
> National Library of Finland
> P.O. Box 26 (Kaikukatu 4)
> 00014 HELSINGIN YLIOPISTO
> Tel. +358 50 3199529
> osma.suomi...@helsinki.fi
> http://www.nationallibrary.fi
>
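The client-side behaviour Osma describes for EasyRdf — declare prefixes once, prepend only the ones a query actually uses — can be approximated in a few lines. A naive sketch, assuming hypothetical names; the substring test for `prefix:` is deliberately crude (it would mis-fire on literals and IRIs containing that text), and a real implementation would tokenize the query:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixHelper {
    // Prefixes declared once, client-side.
    static final Map<String, String> PREFIXES = new LinkedHashMap<>();
    static {
        PREFIXES.put("rdfs", "http://www.w3.org/2000/01/rdf-schema#");
        PREFIXES.put("owl",  "http://www.w3.org/2002/07/owl#");
    }

    /** Prepend PREFIX declarations only for prefixes used in the query. */
    public static String expand(String query) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : PREFIXES.entrySet())
            if (query.contains(e.getKey() + ":"))
                sb.append("PREFIX ").append(e.getKey())
                  .append(": <").append(e.getValue()).append(">\n");
        return sb.append(query).toString();
    }
}
```

This keeps the SPARQL protocol itself "simple but stupid", as argued above: every request on the wire is still a fully self-contained query.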


Re: AW: URI of anonymous classes

2018-02-27 Thread Martynas Jusevičius
Jos, this reminds me of a similar issue I had:
https://www.mail-archive.com/users@jena.apache.org/msg06279.html

Ideally you should have explicit rdfs:isDefinedBy statements for each term,
linking them to their defining ontology.

On Tue, Feb 27, 2018 at 7:59 PM, Jos Lehmann <
jos.lehm...@bauhaus-luftfahrt.net> wrote:

> Hi Lorenz
>
> If you could be more specific it would help.
>
> The restrictions I am working with don't seem to have a URI, so I can't
> tell if, in the input ontologies, they belong to the imported ontology or
> to the importing ontology. I.e.:
>
> WHEN DOING:
>
> ExtendedIterator properties = thisClass.listProperties();
>   while (properties.hasNext()) {
> Statement thisProperty =
> (Statement) properties.next();
> if (!(((Character.isDigit(
> thisProperty.getObject().toString().charAt(0 ||
> thisProperty.getObject().toString().startsWith("-"))) {
> 
> newClass.addProperty(thisProperty.getPredicate(),
> thisProperty.getObject());
> }
> else
> {System.out.println("Anonymous
> class: " + thisProperty.getObject().toString());
> System.out.println("Anonymous
> class' URI:  "  + ((Resource) thisProperty.getObject()).getURI());
>
>
>
> I GET, FOR INSTANCE:
>
> Anonymous class: 1a26f878:161d895229e:-7e84
> Anonymous class' URI:  null
>
>
> How else can I programmatically  determine whether a restriction comes
> from the imported or from the importing ontology
> Cheers, Jos
>
>
>
> -Ursprüngliche Nachricht-
> Von: Lorenz Buehmann [mailto:buehm...@informatik.uni-leipzig.de]
> Gesendet: Dienstag, 27. Februar 2018 12:29
> An: users@jena.apache.org
> Betreff: Re: AW: URI of anonymous classes
>
> You can get access to imported models via
>
> OntModel::getImportedModel(String uri)
>
> or
>
> OntModel::listSubModels(boolean withImports)
>
>
> Javadoc[1] is always your friend.
>
>
> [1]
> https://jena.apache.org/documentation/javadoc/jena/
> org/apache/jena/ontology/OntModel.html
>
>
> On 27.02.2018 11:48, Jos Lehmann wrote:
> > As an integration to my question (see Ursprüngliche Nachricht) please
> see further down the ouput of re-writing the importing ontology*.
> > The same intersection is repeated. Yet, Protege is able to tell that one
> of the two intesections comes from the imported ontology the other from the
> importing ontology. How? How to tell the the same thing in jena.
> >
> > Jos
> >
> > -Ursprüngliche Nachricht-
> > Von: Jos Lehmann
> > Gesendet: Dienstag, 27. Februar 2018 11:09
> > An: users@jena.apache.org
> > Betreff: URI of anonymous classes
> >
> > Hi there
> >
> > I am rewriting an ontology using jena. I rewrite each import. When
> rewriting an importing ontology, anonymous classes (e.g. restrictions or
> disjointness axioms) of the imported ontology appear twice in the importing
> ontology, because to re-write the importing ontology I am working with
> OntModelSpec.OWL_DL_MEM, to make sure rewrite instances asserted in the
> importing ontology of classes declared in the imported ontology.
> >
> > Question: is there a way of telling in jena where (at which URI) was a
> restriction or an axiom asserted?
> >
> >Note: Protege has a tooltip saying "Asserted in: URI" of such
> classes. But the rdf/xml file does not seem to have such a property.
> >
> >   Subquestion: Does protege compute this property by looking at all
> URI of all statements oft he anonymous classes?
> >
> >
> > Hope I am making sense. Jos
> >
> >
> >
> >
> > * OUTPUT OF RE-WRITING IMPORTING ONTOLOGY
> >
> > http://purl.oclc.org/NET/ssnx/qu/qu#Dimension";>
> > 
> >   
> > 
> >   
> > 
> >   http://purl.oclc.org/NET/ssnx/qu/qu#
> DimensionFactor"/>
> > 
> > 
> >   http://purl.oclc.
> org/NET/ssnx/qu/qu#dimensionFactor">
> > 
> >   http://purl.oclc.
> org/NET/ssnx/qu/qu#DimensionFactor"/>
> > 
> > skos:exactMatch 'factor' [SysML 1.2-QUDV]
> http://www.omgwiki.org/OMGSysML/doku.php?id=sysml-qudv:quantities_units_
> dimensions_values_qudv
> > Rational number that specifies the factor
> in the dimension conversion relationship.
> > http://purl.oclc.org/NET/ssnx/qu
> 
> > dimension factor
> > http://www.omgsysml.org/qudv#Dimension rdfs:seeAlso>
> > http://www.w3.
> org/2002/07/owl#ObjectProperty"/>
> >   
> > 
> >   
> >   
> > http://www.w3.
> org/2001/XMLSchema#string"/>
> > http://www.w3.
> org/2001/XMLSchema#int"
> > >1
> > 
> >   http://purl.oclc.
> org/NET/ssnx/qu/

Re: Fuseki errors with concurrent requests

2018-03-06 Thread Martynas Jusevičius
Maybe you can make a reproducible using JMeter or such.

On Tue, Mar 6, 2018 at 11:24 AM, Mikael Pesonen 
wrote:

>
> Yes, clean install of Ubuntu, Jena etc.
>
>
>
>
> On 5.3.2018 17:40, Andy Seaborne wrote:
>
>>
>>
>> On 05/03/18 15:04, Mikael Pesonen wrote:
>>
>>>
>>> We are using GSP and our test script is doing ~20 json-ld inserts and
>>> sparql updates in a row ASAP, and we are running 10 test scripts
>>> concurrently. This test is failing now.
>>>
>>
>> Starting with an empty database?
>>
>>
>>>
>>> On 5.3.2018 16:51, ajs6f wrote:
>>>
 "fairly high load and concurrent usage"

 This is not a very precise or reproducible measure.

 Many sites use Jena in production at all kinds of scales for all kinds
 of dimensions, including HA setups. If you can explain more about your
 specific situation, you will get more useful advice.

 ajs6f

 On Mar 5, 2018, at 9:45 AM, Mikael Pesonen 
> wrote:
>
>
> To be clear: can Jena be recommended for production database in our
> customer cases for fairly high load and concurrent usage? Or is it mainly
> for scientific purposes?
>
> Br
>
> On 5.3.2018 16:41, ajs6f wrote:
>
>> To my knowledge (Andy of course is the TDB expert) you can't really
>> rebuild a TDB instance from a corrupted TDB instance. You should start 
>> with
>> a known-good backup or original RDF files.
>>
>> ajs6f
>>
>> On Mar 5, 2018, at 9:32 AM, Mikael Pesonen <
>>> mikael.peso...@lingsoft.fi> wrote:
>>>
>>>
>>> Still having these issues on all of our installations.
>>>
>>> I'm going to rule out corrupted database on our oldest server. What
>>> would be preferred way to rebuild data?
>>>
>>> Data folder:
>>>
>>>   5226102784 Mar  5 12:48 GOSP.dat
>>>260046848 Mar  5 12:48 GOSP.idn
>>>   5377097728 Mar  5 12:48 GPOS.dat
>>>268435456 Mar  5 12:48 GPOS.idn
>>>   5486149632 Mar  5 12:48 GSPO.dat
>>>285212672 Mar  5 12:48 GSPO.idn
>>>0 Mar  5 12:48 journal.jrnl
>>>545259520 Mar  5 12:38 node2id.dat
>>>150994944 Feb 20 16:32 node2id.idn
>>>497658012 Mar  5 12:38 nodes.dat
>>>1 Nov 14 15:27 none.opt
>>> 33554432 Jan 24 17:06 OSP.dat
>>>   4848615424 Mar  5 12:48 OSPG.dat
>>>293601280 Mar  1 12:46 OSPG.idn
>>>  8388608 Jan 24 16:59 OSP.idn
>>> 25165824 Jan 24 17:06 POS.dat
>>>   4966055936 Mar  5 12:48 POSG.dat
>>>276824064 Mar  5 12:38 POSG.idn
>>>  8388608 Jan 24 16:55 POS.idn
>>>  8388608 Jan 31 12:06 prefix2id.dat
>>>  8388608 Mar 15  2016 prefix2id.idn
>>> 6771 Jan 31 12:06 prefixes.dat
>>> 25165824 Jan 31 12:06 prefixIdx.dat
>>>  8388608 Jan  8 13:19 prefixIdx.idn
>>> 33554432 Jan 24 17:06 SPO.dat
>>>   5075107840 Mar  5 12:48 SPOG.dat
>>>369098752 Mar  5 12:48 SPOG.idn
>>>  8388608 Jan 24 17:04 SPO.idn
>>> 4069 Nov  7 16:38 _stats.opt
>>>4 Feb  6 12:01 tdb.lock
>>>
>>> On 30.1.2018 15:04, Andy Seaborne wrote:
>>>
 These seem to be different errors.

 "In the middle of an alloc-write" is possibly a concurrency issue.
 "Failed to read" is possibly a previous corrupted database

 This is a text dataset? That should be using an MRSW lock to get
 some level isolation.

 What's the Fuseki config in this case?

  Andy

 On 24/01/18 23:40, Chris Tomlinson wrote:

> On the latest 3.7.0-Snapshot (master branch) I also saw repeated
> occurrences of this the other day while running some queries from the
> fuseki browser app and with a database load going on with our own app 
> using:
>
> DatasetAccessorFactory.createHTTP(baseUrl+"/data”);
>
>
> with for the first model to transfer:
>
>   DatasetAccessor putModel(graphName, m);
>
> and for following models:
>
>   static void addToTransferBulk(final String graphName, final
> Model m) {
>   if (currentDataset == null)
>   currentDataset = DatasetFactory.createGeneral();
>   currentDataset.addNamedModel(graphName, m);
>   triplesInDataset += m.size();
>   if (triplesInDataset > initialLoadBulkSize) {
>   try {
>   loadDatasetMutex(currentDataset);
>   currentDataset = null;
>   triplesInDataset = 0;
>   } catch (TimeoutException e) {
>   e.printStackTrace();
>   return;
>   }
>   }
>   }
>
> as I say the exc

Re: Get the restrictions imposed on a class to the individuals of that type

2018-03-07 Thread Martynas Jusevičius
First of all, your class example is not valid OWL and not even RDF.

I think you can model this using owl:hasValue restrictions:
https://www.w3.org/TR/owl-ref/#hasValue-def

On Wed, Mar 7, 2018 at 10:28 AM, Amar Banerjee 
wrote:

> Hi folks !!
>
> I have been trying to create a sparql query to get the restrictions imposed
> on a class for the individuals of that class.
> However, I am not sure what exactly has to be done.
>
> I have a :
>
> Class SmallCan:
>   subClass Of : radius value 0.1
>   subClassOf : height value 1
>
> Now for this class, I have an individual -
>
> Coke-Can :
>  type: SmallCan
>
> I want a query which could automatically tell me that the Coke-Can has
> radius 0.1 and height 1
>
> Something like this  :-
>
> SELECT ?property ?value WHERE {
> Coke-Can ...
> }
>
> and I get a result like :-
> --
> Property  |   Value
> ---
> radius  | 0.3
> height  |  1
> 
>
> Looking for your help.
>
> Cheers.
>


Re: Streaming CONSTRUCT/INSERTs in TDB

2018-03-11 Thread Martynas Jusevičius
OOME = OutOfMemoryError, i.e. the JVM ran out of heap

On Sun, Mar 11, 2018 at 10:27 PM, Adrian Gschwend 
wrote:

> On 10.03.18 00:36, Andy Seaborne wrote:
>
> Hi Andy,
>
> > Executes in 2m 20s (java8) for me (and 1m 49s with java9 which is .
> > Default heap which is IIRC 25% of RAM or 8G. Cold JVM, cold file cache.
>
> wow, did you do that with TDB commandline tools? Default heap in terms
> of default settings of Fuseki?
>
> > If you have an 8G machine, an 8G heap may cause problems (swapping).
>
> I have 16 gigs on my local system.
>
> > Does the CPU load go up very high, on all cores? That's a sign of a full
> > GC trying to reclaim space before a OOME.
>
> yes that's exactly what is happening. What is OOME?
>
> > If you get the same with TDB2, then the space isn't going in
> > transactions in TDB1.
>
> not sure what that means but ok :)
>
> regards
>
> Adrian
>


[3.0.1] ResultSetFactory.fromJSON() won't parse ASK JSON result

2018-03-11 Thread Martynas Jusevičius
Hi,

I'm getting the following JSON result from an ASK query:

  { "head": {}, "boolean": true }

However, the method that usually works fine, will not parse it from
InputStream (Jena 3.0.1):

org.apache.jena.sparql.resultset.ResultSetException: Not a ResultSet
result
org.apache.jena.sparql.resultset.SPARQLResult.getResultSet(SPARQLResult.java:94)
org.apache.jena.sparql.resultset.JSONInput.fromJSON(JSONInput.java:64)
org.apache.jena.query.ResultSetFactory.fromJSON(ResultSetFactory.java:331)

I stepped inside the code and I see that JSONObject is parsed fine, but
afterwards SPARQLResult.resultSet field is not being set for some reason.

Any ideas?


Martynas


Re: [3.0.1] ResultSetFactory.fromJSON() won't parse ASK JSON result

2018-03-12 Thread Martynas Jusevičius
Hi Andy,

I'm not using QueryExecution here, I'm trying to parse JSON read from HTTP
InputStream using ResultSetFactory.fromJSON().

Then I want to carry the result set, maybe do some logic based on it, and
possibly serialize it back using ResultSetFormatter.

Is that not possible with ASK result?

On Mon, Mar 12, 2018 at 9:46 AM, Andy Seaborne  wrote:

>
>
> On 11/03/18 23:03, Martynas Jusevičius wrote:
>
>> Hi,
>>
>> I'm getting the following JSON result from an ASK query:
>>
>>{ "head": {}, "boolean": true }
>>
>> However, the method that usually works fine, will not parse it from
>> InputStream (Jena 3.0.1):
>>
>>  org.apache.jena.sparql.resultset.ResultSetException: Not a ResultSet
>> result
>> org.apache.jena.sparql.resultset.SPARQLResult.getResultSet(
>> SPARQLResult.java:94)
>> org.apache.jena.sparql.resultset.JSONInput.fromJSON(JSONInput.java:64)
>> org.apache.jena.query.ResultSetFactory.fromJSON(ResultSetFac
>> tory.java:331)
>>
>> I stepped inside the code and I see that JSONObject is parsed fine, but
>> afterwards SPARQLResult.resultSet field is not being set for some reason.
>>
>> Any ideas?
>>
>
> The outcome of an ASK query is a boolean, not a ResultSet.
>
> See execAsk.
>
> SPARQLResult is the class for a holder of any SPARQL result type.
>
> Andy
>
>
>>
>> Martynas
>>
>>
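As Andy notes, the outcome of an ASK query is a boolean, so SPARQLResult holds it separately from any ResultSet — the caller has to check which kind of result arrived before asking for it. A stdlib-only sketch of telling the two SPARQL 1.1 JSON response shapes apart (plain string matching, for illustration only — not a JSON parser):

```java
public class SparqlJsonKind {
    /**
     * Crude classifier for a SPARQL 1.1 JSON response body:
     * ASK responses carry a "boolean" member, SELECT responses
     * carry "results"/"bindings".
     */
    public static String kind(String json) {
        if (json.contains("\"boolean\"")) return "boolean";
        if (json.contains("\"bindings\"")) return "resultset";
        return "unknown";
    }

    /** Extract the ASK value, or null if this isn't a boolean result. */
    public static Boolean askValue(String json) {
        if (!kind(json).equals("boolean")) return null;
        return json.replaceAll("\\s", "").contains("\"boolean\":true");
    }
}
```

In Jena terms the equivalent branch is: check the holder for a boolean result first, and only fall back to treating the payload as a ResultSet when it isn't one.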


Re: FILTER (CONTAINS on a graph name : should order matter ?

2018-03-12 Thread Martynas Jusevičius
?thing is undefined within GRAPH in your second query.

On Mon, Mar 12, 2018 at 3:59 PM, Jean-Marc Vanel 
wrote:

> Hi !
>
> This works as expected:
>
> SELECT DISTINCT ?thing
>   WHERE {
>graph ?thing {
>  [] ?p ?O .
>}
>FILTER (CONTAINS( str(?thing),"cartopair"))
>  }
>
> but this gives an empty result :
>
> SELECT DISTINCT ?thing
> WHERE {
>  graph ?thing {
>[] ?p ?O .
>FILTER (CONTAINS( str(?thing),"cartopair"))
>  }
> }
>
>
> --
> Jean-Marc Vanel
> http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me#subject
> <http://www.semantic-forms.cc:9111/display?displayuri=http://jmvanel.free.fr/jmv.rdf%23me>
> Déductions SARL - Consulting, services, training,
> Rule-based programming, Semantic Web
> +33 (0)6 89 16 29 52
> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
>


Re: Getting Symmetric Concise Bounded Description with Fuseki

2018-03-12 Thread Martynas Jusevičius
I disagree about SCBD as the default. In a Linked Data context, DESCRIBE is
usually used to return description of a resource, meaning the resource is
in the subject position. And then bnode closure is added, because otherwise
there would be no way to reach those bnodes. It's not about exploring the
graph in all directions.

If you want more specific description, then you can always use CONSTRUCT.

Some triplestores, for example Dydra, allow specification of the
description algorithm using a special PREFIX scheme, such as

PREFIX describeForm: 

On Mon, Mar 12, 2018 at 4:40 PM, Reto Gmür  wrote:

> Hi Andy
>
> > -Original Message-
> > From: Andy Seaborne 
> > Sent: Saturday, March 10, 2018 3:47 PM
> > To: users@jena.apache.org
> > Subject: Re: Getting Symmetric Concise Bounded Description with Fuseki
> >
> > Hi Reto,
> >
> > The whole DescribeHandler system is very(, very) old and hasn't changed
> in
> > ages, other than maintenance.
> >
> > On 10/03/18 11:44, Reto Gmür wrote:
> > > Hi Andy,
> > >
> > > It first didn't quite work as I wanted it to: the model of the
> resource passed
> > to the describe model is the default graph so I got only the triples in
> that
> > graph.  Setting "tdb:unionDefaultGraph true" didn't change the graph the
> > DescribeHandler gets.
> >
> > tdb:unionDefaultGraph only affects SPARQL execution.
> >
> > > Looking at the default implementation I saw that the Dataset can be
> > accessed from the context passed to the start method with
> > cxt.get(ARQConstants.sysCurrentDataset). I am now using the Model
> returned
> > by dataset. getUnionModel.
> >
> > That should work.  Generally available getUnionModel post-dates the
> describe
> > handler code.
> >
> > > I'm wondering why the DescribeBNodeClosure doesn't do the same but
> > instead queries for all graphs that contain the resource and then works
> on
> > each of the NamedModel individually. Is the UnionModel returned by the
> > dataset inefficient that you've chosen this approach?
> >
> > I don't think so - much the same work is done, just in different places.
> >
> > getUnionModel will work with blank node named graphs.
> >
> > getUnionModel will do describes spanning graphs, iterating over named
> > graphs will not.
> >
> > > Also the code seems to assume that the name of the graph is a URI, does
> > Jena not support Blank Nodes as names for graphs (having an "anonymous
> > node" as name might be surprising but foreseen in RDF datasets)?
> >
> > Again, old code (pre RDF 1.1, which is where bNode graph names came in).
> >
> > Properly, nowadays, it should all work on DatasetGraph whose API does
> work
> > with bNode graphs.  Again, history.
> >
> > If you want to clean up, please do so.
> >
> > > It seems that even when a DescribeHandler is provided, the default
> handler
> > is executed as well. Is there a way to disable this?
> >
> > IIRC all the handers are executed - the idea being to apply all policies
> and
> > handlers may only be able to describe certain classes.  Remove any not
> > required, or set your own registry in the query (a bit tricky in Fuseki).
> >
> > > Another question is about the concept of "BNode closure", what's the
> > rationale for expanding only forward properties? Shouldn't a closure be
> > everything that defines the node?
> >
> > It is a simple, basic policy - the idea being that more appropriate ones
> which
> > are data-sensitive would be used. This basic one can go wrong (FOAF
> graphs
> > when people are bnodes) and does not handle IFP; it does cover blank
> nodes
> > used for values with structure and for RDF lists.
> >
> > The point about DESCRIBE is that the "right" answer is not a fixed data-
> > independent algorithm but is best for the data being published.
>
> I realize that. My question was more about the definition of "closure".
> Following forward properties might be a pragmatic approach, the data can
> often be modelled in such a way that this default implementation of
> DESCRIBE returns very useful results.
>
> But, in some cases even forward properties only, might result in a too
> comprehensive response. So if the current system doesn't allow disabling
> the default handler one cannot make this answer smaller (e.g. return a
> description of instances of ex:Organization without all its ex:hasMember
> properties). I think fuseki should both allow returning results that
> contain more as well as less than the default.
>
> As for the best default I think SCBD is the best because independently of
> the data being published and ontologies being used it returns everything
> the server knows about a particular resource, only stopping the contextual
> description where the client can get more information with another DESCRIBE
> query. With SCBD a connected graph can be fully explored with DESCRIBE
> starting at any resource. Yes, the response might be too comprehensive and so
> there needs to be a mechanism for DESCRIBE handlers to allow responses that
> are smaller than the de
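For readers looking for a concrete starting point: a symmetric description of the kind Reto describes can be approximated with a plain CONSTRUCT. This is only a sketch, not a DESCRIBE handler; `<http://example.org/resource>` is a placeholder IRI, and unlike real (S)CBD it does not recurse through blank nodes:

```sparql
# Collect all triples where the resource appears as subject,
# plus all triples where it appears as object.
CONSTRUCT {
  <http://example.org/resource> ?p ?o .
  ?s ?q <http://example.org/resource> .
}
WHERE {
  { <http://example.org/resource> ?p ?o }
  UNION
  { ?s ?q <http://example.org/resource> }
}
```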

Re: parse one quad?

2018-03-12 Thread Martynas Jusevičius
Maybe this?
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/RDFDataMgr.html#read-org.apache.jena.query.Dataset-java.io.InputStream-org.apache.jena.riot.Lang-

On Mon, Mar 12, 2018 at 8:46 PM, ajs6f  wrote:

> I've got a use case for parsing one quad (in NQuads form) from a String.
> I've been paging around through RIOT and other parts of Jena, but I just
> can't seem to find any way to do this without building up a bunch of
> auxiliary objects (like Readers or StreamRDFs, etc.). Performance is
> something of a concern, so I'd rather not build up any more than I have to.
>
> Am I missing something, or do we just not expose that functionality? (I'm
> inclined to bet that we _have_ to have impled it somewhere, just for our
> own sanity, but maybe not!)
>
> ajs6f
>
>


Re: parse one quad?

2018-03-12 Thread Martynas Jusevičius
So what are you going to parse the quad into, if not Dataset?

On Mon, Mar 12, 2018 at 9:11 PM, ajs6f  wrote:

> Thanks, Martynas, but no; I don't have a Dataset (and don't need or want
> to build one for a single quad), and no InputStream (although I could get
> one from a String without too much fuss).
>
> RDFDataMgr or RDFParser are usually the best tools for parsing, but I'm
> looking for something a bit lighter-weight.
>
> ajs6f
>
> > On Mar 12, 2018, at 4:07 PM, Martynas Jusevičius 
> wrote:
> >
> > Maybe this?
> > https://jena.apache.org/documentation/javadoc/arq/org/
> apache/jena/riot/RDFDataMgr.html#read-org.apache.jena.
> query.Dataset-java.io.InputStream-org.apache.jena.riot.Lang-
> >
> > On Mon, Mar 12, 2018 at 8:46 PM, ajs6f  wrote:
> >
> >> I've got a use case for parsing one quad (in NQuads form) from a String.
> >> I've been paging around through RIOT and other parts of Jena, but I just
> >> can't seem to find any way to do this without building up a bunch of
> >> auxiliary objects (like Readers or StreamRDFs, etc.). Performance is
> >> something of a concern, so I'd rather not build up any more than I have
> to.
> >>
> >> Am I missing something, or do we just not expose that functionality?
> (I'm
> >> inclined to bet that we _have_ to have impled it somewhere, just for our
> >> own sanity, but maybe not!)
> >>
> >> ajs6f
> >>
> >>
>
>
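One relatively lightweight way to do this with the RIOT machinery (a sketch, and not necessarily the minimal-allocation path ajs6f is after) is to drive `RDFParser` from a `String` into a small `StreamRDF` sink that captures the quad:

```java
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFParser;
import org.apache.jena.riot.system.StreamRDFBase;
import org.apache.jena.sparql.core.Quad;

public class OneQuad {

    // Parse a single N-Quads line into a Quad, without building a Dataset.
    public static Quad parseQuad(String nquadLine) {
        final Quad[] captured = new Quad[1];
        RDFParser.create()
                 .fromString(nquadLine)
                 .lang(Lang.NQUADS)
                 .parse(new StreamRDFBase() {
                     @Override
                     public void quad(Quad q) { captured[0] = q; }
                 });
        return captured[0];
    }

    public static void main(String[] args) {
        Quad q = parseQuad(
            "<http://example.org/s> <http://example.org/p> \"o\" <http://example.org/g> .");
        System.out.println(q);
    }
}
```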


Re: [3.0.1] ResultSetFactory.fromJSON() won't parse ASK JSON result

2018-03-14 Thread Martynas Jusevičius
Andy,

I don't think that helps much. In fact, I think treating ASK result
differently from SELECT breaks some abstractions.

What I mean is that the result data structure normally maps to a media type
and not its query form. That way we can have generic parsers/serializers
that are orthogonal to application logic, for example:

MessageBodyReader: application/rdf+xml, text/turtle,
application/n-triples...
MessageBodyReader: application/n-quads...
MessageBodyReader: application/sparql-results+xml,
application/sparql-results+json...

Jena's treatment of ASK result breaks this pattern, because it maps to the
same media types as ResultSet does, but there is no way to parse it as
such. Do you see what I mean?

SPARQLResult does not help, because MessageBodyReader makes
little sense.

Why not have ResultSet.getBoolean() or something?

On Mon, Mar 12, 2018 at 12:17 PM, Andy Seaborne  wrote:

> JSONInput.make(InputStream) -> SPARQLResult
>
> Andy
>
>
> On 12/03/18 10:13, Martynas Jusevičius wrote:
>
>> Hi Andy,
>>
>> I'm not using QueryExecution here, I'm trying to parse JSON read from HTTP
>> InputStream using ResultSetFactory.fromJSON().
>>
>> Then I want to carry the result set, maybe do some logic based on it, and
>> possibly serialize it back using ResultSetFormatter.
>>
>> Is that not possible with ASK result?
>>
>> On Mon, Mar 12, 2018 at 9:46 AM, Andy Seaborne  wrote:
>>
>>
>>>
>>> On 11/03/18 23:03, Martynas Jusevičius wrote:
>>>
>>> Hi,
>>>>
>>>> I'm getting the following JSON result from an ASK query:
>>>>
>>>> { "head": {}, "boolean": true }
>>>>
>>>> However, the method that usually works fine, will not parse it from
>>>> InputStream (Jena 3.0.1):
>>>>
>>>> org.apache.jena.sparql.resultset.ResultSetException: Not a ResultSet result
>>>> org.apache.jena.sparql.resultset.SPARQLResult.getResultSet(SPARQLResult.java:94)
>>>> org.apache.jena.sparql.resultset.JSONInput.fromJSON(JSONInput.java:64)
>>>> org.apache.jena.query.ResultSetFactory.fromJSON(ResultSetFactory.java:331)
>>>>
>>>> I stepped inside the code and I see that JSONObject is parsed fine, but
>>>> afterwards SPARQLResult.resultSet field is not being set for some
>>>> reason.
>>>>
>>>> Any ideas?
>>>>
>>>>
>>> The outcome of an ASK query is a boolean, not a ResultSet.
>>>
>>> See execAsk.
>>>
>>> SPARQLResult is the class for a holder of any SPARQL result type.
>>>
>>>  Andy
>>>
>>>
>>>
>>>> Martynas
>>>>
>>>>
>>>>
>>
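Following Andy's pointer, a sketch of reading either result form from the same stream via `SPARQLResult` (method names as in Jena 3.x; check the javadoc for your version):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.jena.query.ResultSet;
import org.apache.jena.sparql.resultset.JSONInput;
import org.apache.jena.sparql.resultset.SPARQLResult;

public class ReadSparqlJson {
    public static void main(String[] args) {
        String json = "{ \"head\": {}, \"boolean\": true }";
        InputStream in = new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));

        // SPARQLResult is the holder for any SPARQL result type,
        // so branch on what the stream actually contained.
        SPARQLResult result = JSONInput.make(in);
        if (result.isBoolean()) {
            System.out.println("ASK result: " + result.getBooleanResult());
        } else if (result.isResultSet()) {
            ResultSet rs = result.getResultSet();
            while (rs.hasNext()) {
                System.out.println(rs.next());
            }
        }
    }
}
```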


Re: Vocabularies for actions

2018-03-17 Thread Martynas Jusevičius
There are no "actions" (verbs) on Linked Data really, only resources
(nouns). They have descriptions that you can retrieve/change using generic
"verbs": HTTP methods GET, POST, PUT, DELETE.

You can remodel your actions simply in terms of appending and updating
resource descriptions.

On Sat, Mar 17, 2018 at 5:20 PM, Laura Morales  wrote:

> Hi,
>
> all the vocabularies that I know of (and that I can find) seem to
> *describe* something, like a Person. Is there any vocabulary to *ask for*
> an action? For instance in the context of a source code repository, is
> there a dictionary to describe an "action to be performed" by a machine
> such as "create new  by " or "add  to " or
> "fork  to "?
>
> Thanks.
>


Re: Example code

2018-03-18 Thread Martynas Jusevičius
You can take a look here, these projects use Jena extensively:
https://github.com/AtomGraph/Core
https://github.com/AtomGraph/Processor
https://github.com/AtomGraph/Web-Client

On Sun, Mar 18, 2018 at 4:19 AM, David Moss  wrote:

> Nearly all the example code on the web for Jena is restatement of Javadoc.
> This is better than nothing, but what seems to be missing is examples of
> how Jena is used in real-world applications.
> I believe publishing practical examples would dramatically increase the
> use of Jena and semantic processing in general.
>
> Without a pool of practical examples people using Jena are working in
> isolation.
> There are so many ways of achieving results, but which are the ways people
> are actually using in real applications?
>
> For example, when using data from a SPARQL endpoint, what is the accepted
> way to retrieve it, store it locally and make it available through user
> interface controls?
> As far as I can tell there are no examples of how to do this anywhere.
>
> ie How would you go about populating a dropdown list in a UI with data
> from a SPARQL endpoint?
>
> Am I missing something (I hope so!)
> If not, does anyone want to contribute some examples of using Jena in
> real-world applications?
>
> If it turns out there are no such examples out there and no-one wants to
> contribute examples, I will write some myself. But they will be awful!
> I’d much rather people with experience provided this much needed
> information. I’m happy to collate and publish.
>
> DM
>
>
>
>
>
>
>
>
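To make the dropdown question concrete, one common pattern (a sketch; the endpoint URL and query are placeholders) is to run a remote SELECT and collect one variable into a plain list that any UI toolkit can bind to:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;

public class DropdownItems {

    // Fetch up to 100 rdfs:label values from a remote SPARQL endpoint.
    public static List<String> loadLabels(String endpoint) {
        String query =
            "SELECT ?label WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label } LIMIT 100";
        List<String> labels = new ArrayList<>();
        // try-with-resources closes the underlying HTTP connection.
        try (QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, query)) {
            ResultSet results = qexec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                labels.add(row.getLiteral("label").getString());
            }
        }
        return labels;
    }
}
```

The returned `List<String>` can then back a Swing `JComboBox`, a JavaFX `ChoiceBox`, or an HTML `<select>`; the Jena side stays UI-agnostic.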


Re: Example code

2018-03-19 Thread Martynas Jusevičius
David,

I gave you links, but I take it you haven't looked. The Web-Client project
specifically renders RDF as HTML. The crucial class is this:
https://github.com/AtomGraph/Web-Client/blob/master/src/main/java/com/atomgraph/client/writer/ModelXSLTWriter.java

If you are looking to write generic software, you definitely want to render
Model and not ResultSet. With ResultSet you only get a plain old table,
with all the graph relationships stripped away.

It also helps to think about the UI as a function of the data. HTML webpage
is just one more transformation applied to the Linked Data RDF description.

On Mon, Mar 19, 2018 at 12:33 PM, David Moss  wrote:

>
>
> On 19/3/18, 5:39 pm, "Lorenz Buehmann"  leipzig.de> wrote:
>
> >Well, isn't that the task of the UI logic? You get JSON-LD and now you
> >can visualize it. I don't really see the problem here?
>
> Therein lies the problem. I'm sure _you_ know how to do it.
> How does someone without experience in integrating Jena with UI know how
> to do it?
>
> >dataset -> query -> data -> visualization (table, graph, etc.)
>
> Those are indeed a set of steps. Do you have an example of how to do that
> in java code and load the result into a combobox for selection in a UI?
>
> >Why should this be an example on the Apache Jena documentation?
>
> It shouldn't. It should be stored separately from the Apache Jena
> documentation.
> The Javadoc is for how Jena works internally and how to maintain Jena
> itself.
> I'm talking about examples to help people use Jena in the kind of
> applications people want to use.
>
> One of the dilemmas I have regarding Jena is how to store query results
> locally.
> I could use Jena to query an endpoint, iterate through the ResultSet and
> build POJOs or Tables.
> Or is it better to keep the results in a Model and query that again to
> build UI components?
> Or maybe I should ditch the fancy Jena objects and just get a result as a
> JSON object and work with that?
>
> These are all possibilities, but how is it actually being done in real
> projects? Where are the examples?
>
> A reply like "dataset -> query -> data -> visualization (table, graph,
> etc.)"  is very glib, but it doesn't actually have anything in the way of
> example code that can be used by people new to Jena in their own real-world
> programs. That is what I see as missing.
>
>
> DM
>
>
>
>
>
>
>
>
>
>
>
>
> On 19.03.2018 08:31, David Moss wrote:
> > That is certainly a way to get data from a SPARQL endpoint to
> display in a terminal window.
> > It does not store it locally or put it into a user-friendly GUI
> control however.
> > Looks like I might have to roll my own and face the music publicly
> if I'm doing it wrong.
> >
> > I think real-world examples of how to use Jena in a user friendly
> program are essential to advancing the semantic web.
> > Thanks for considering my question.
> >
> > DM
> >
> > On 19/3/18, 4:19 pm, "Laura Morales"  wrote:
> >
> > As far as I know the only way to query a Jena instance remotely is via
> HTTP. So, install Fuseki and then send a traditional HTTP GET/POST request
> to it with two parameters, "query" and "format". For example
> >
> > $ curl --data "format=json&query=..." http://your-endpoint.org
> >
> >
> >
> > Sent: Sunday, March 18, 2018 at 11:26 PM
> > From: "David Moss" 
> > To: users@jena.apache.org
> > Subject: Re: Example code
> >
> > On 18/3/18, 6:24 pm, "Laura Morales"  wrote:
> >
> > >> For example, when using data from a SPARQL endpoint, what is
> the accepted
> > >> way to retrieve it, store it locally and make it available
> through user
> > >> interface controls?
> >
> > >Make a query that returns a jsonld document.
> >
> > How? Do you have some example code showing how this query is
> retrieved, dealt with locally and made available to an end user through a
> GUI control?
> > What I am looking for here is a bridge between what experts
> glean from reading Javadoc and what ordinary people need to use Jena within
> a GUI based application.
> >
> > I see this kind of example as the missing link that prevents
> anyone other than expert using Jena.
> > So long as easy to follow examples of how to get from an rdf
> triplestore to information displayed on a screen in a standard GUI way are
> missing, Jena will remain a plaything for expert enthusiasts.
> >
> > DM
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
>
>
>
>
>


Re: Example code

2018-03-20 Thread Martynas Jusevičius
David,

the actual UI rendering is done not by Java but by XSLT stylesheets that
render RDF/XML:
https://github.com/AtomGraph/Web-Client/blob/master/src/main/webapp/static/com/atomgraph/client/xsl/bootstrap/2.3.2/layout.xsl

The stylesheet is invoked by the ModelXSLTWriter provider I mentioned
earlier.

On Tue, Mar 20, 2018 at 12:22 PM, David Moss  wrote:

>
>
> On 19/3/18, 9:40 pm, "Martynas Jusevičius" 
> wrote:
>
> David,
>
> >I gave you links, but I take it you haven't looked. The Web-Client project
>> specifically renders RDF as HTML. The crucial class is this:
>> https://github.com/AtomGraph/Web-Client/blob/master/src/
> main/java/com/atomgraph/client/writer/ModelXSLTWriter.java
>
> Actually, I have looked. The code you mentioned is over 400 lines long and
> I can't see a UI component in it.
> I have to admire your self-documenting coding style, but it is not the
> easy-to-follow example I was looking for.
>
>> If you are looking to write generic software, you definitely want to
> render
>>Model and not ResultSet. With ResultSet you only get a plain old table,
>> with all the graph relationships stripped away.
>
> I've suspected as much. By iterating over ResultSet I may as well use a
> relational database instead of a semantic model.
> It is frustrating! I know intuitively using a semantic model is richer but
> without examples I'm reinventing a wheel that took teams of smarter people
> than me years to invent the first time.
>
> >It also helps to think about the UI as a function of the data. HTML
> webpage
> >is just one more transformation applied to the Linked Data RDF
> description.
>
> Again, easy to say. Probably easy for YOU to do. But where is the
> easy-to-follow example code, in small enough bites for the beginner to follow?
>
> I'm pretty much resigned to there not being any. I will try to write some.
>
> DM
>
>
>
> On Mon, Mar 19, 2018 at 12:33 PM, David Moss 
> wrote:
>
> >
> >
> > On 19/3/18, 5:39 pm, "Lorenz Buehmann"  > leipzig.de> wrote:
> >
> > >Well, isn't that the task of the UI logic? You get JSON-LD and
> now you
> > >can visualize it. I don't really see the problem here?
> >
> > Therein lies the problem. I'm sure _you_ know how to do it.
> > How does someone without experience in integrating Jena with UI know
> how
> > to do it?
> >
> > >dataset -> query -> data -> visualization (table, graph, etc.)
> >
> > Those are indeed a set of steps. Do you have an example of how to do
> that
> > in java code and load the result into a combobox for selection in a
> UI?
> >
> > >Why should this be an example on the Apache Jena documentation?
> >
> > It shouldn't. It should be stored separately from the Apache Jena
> > documentation.
> > The Javadoc is for how Jena works internally and how to maintain Jena
> > itself.
> > I'm talking about examples to help people use Jena in the kind of
> > applications people want to use.
> >
> > One of the dilemmas I have regarding Jena is how to store query
> results
> > locally.
> > I could use Jena to query an endpoint, iterate through the ResultSet
> and
> > build POJOs or Tables.
> > Or is it better to keep the results in a Model and query that again
> to
> > build UI components?
> > Or maybe I should ditch the fancy Jena objects and just get a result
> as a
> > JSON object and work with that?
> >
> > These are all possibilities, but how is it actually being done in
> real
> > projects? Where are the examples?
> >
> > A reply like "dataset -> query -> data -> visualization (table,
> graph,
> > etc.)"  is very glib, but it doesn't actually have anything in the
> way of
> > example code that can be used by people new to Jena in their own
> real-world
> > programs. That is what I see as missing.
> >
> >
> > DM
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > On 19.03.2018 08:31, David Moss wrote:
> > > That is certainly a way to get data from a SPARQL endpoint to
> > display in a terminal window.
> > > It does not store it locally or put it into a user-friendly GUI
>  

Re: Splitting data into graphs vs datasets

2018-03-20 Thread Martynas Jusevičius
Provenance. With named graphs, it's easier to track where data came from:
who imported it, when etc.
You can also have meta-graphs about other graphs.

Also editing and updating data. You can load named graph contents (of
smallish size) in an editor, make changes and then store a new version in
the same graph. You probably would not want to do this with a large default
graph.

On Tue, Mar 20, 2018 at 1:16 PM, Mikael Pesonen 
wrote:

>
> Hi,
>
> I'm using Fuseki GSP, and so far have put all data into one default
> dataset and using graphs to split it.
>
> If I'm right there would be benefits using more than one dataset
> - better performance - each query is done inside a dataset so less data =
> faster query
> - protection of data - can't "accidentaly" query data from other datasets
> Downsides:
> - combining data from various datasets is heavier task
>
> Is this correct? Any other things that should be considered?
>
> Thank you
>
> --
> Lingsoft - 30 years of Leading Language Management
>
> www.lingsoft.fi
>
> Speech Applications - Language Management - Translation - Reader's and
> Writer's Tools - Text Tools - E-books and M-books
>
> Mikael Pesonen
> System Engineer
>
> e-mail: mikael.peso...@lingsoft.fi
> Tel. +358 2 279 3300
>
> Time zone: GMT+2
>
> Helsinki Office
> Eteläranta 10
> 
> FI-00130 Helsinki
> FINLAND
>
> Turku Office
> Kauppiaskatu 5 A
> 
> FI-20100 Turku
> FINLAND
>
>


Re: Reciprocal relation output with RDF/XML

2018-03-26 Thread Martynas Jusevičius
Maybe you should provide some examples?

On Sun, Mar 25, 2018 at 4:07 PM, Bardo Nelgen <
mailing.list.in...@bnnperformances.de> wrote:

>
> Hi all,
>
> is there any insight, if there are reciprocal connections between two
> things (like with dct:references and dct:isReferencedBy), how Jena weighs
> in the one against the other when creating the "pretty" XML output from a
> SPARQL query ?
>
> Currently my results appear to "tilt" from one view to the other,
> obviously somehow depending on the data contained – and thereby completely
> ignoring the hierarchy assumed by the SPARQL query's "construct" section.
>
> Is there any documentation on enforcing a particular "perspective" on the
> result (if possible at all…) ?
>
> The output is to be processed by an XML-based website system – and parsing
> "plain" RDF/XML is not an option there.
>
> As always, any hints are highly appreciated.
>
> Best,
>
> Bardo
>
>


Re: Parameterized queries

2018-03-26 Thread Martynas Jusevičius
You would be better off asking on the Python rdflib mailing list then.

Looks like prepared queries could help:
https://rdflib.readthedocs.io/en/stable/intro_to_sparql.html#prepared-queries

On Mon, Mar 26, 2018 at 2:10 PM, Laura Morales  wrote:

> Yes but I'd need this to work from Python. Thanks for the reference anyway.
>
>
>
>
> Sent: Monday, March 26, 2018 at 1:36 PM
> From: "agate.m...@gmail.com" 
> To: users@jena.apache.org
> Subject: Re: Parameterized queries
>
> On 2018/03/26 07:30:27, "Laura Morales"  wrote:
> > Is it possible to send a parameterized query to fuseki? I mean sending a
> query along with a list of parameters, more or less like this
> >
> > format=json
> > query="select * where { $sbj a [] }"
> > sbj=""
> >
> > similar to SQL parameterized queries, where parameters are automatically
> escaped in order to prevent injection attacks.
> >
> > I know this would be more of a client issue than server, but I can't
> find any library that does this, so I was wondering if Fuseki has anything
> like this built in. In particular, I'd need a library for Python. Do you
> guys know any by chance?
> >
>
> Hi,
>
> Did you take a look at org.apache.jena.query.ParameterizedSparqlString ?
> We are using it for all the query templates we've created (
> http://purl.bdrc.io)
>
>
> Marc
>
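For reference, a minimal Java sketch of the `ParameterizedSparqlString` approach Marc mentions (the class serializes and escapes the injected values, which is the injection-safety Laura asks about; the IRI below is a placeholder):

```java
import org.apache.jena.query.ParameterizedSparqlString;

public class ParamQuery {
    public static void main(String[] args) {
        ParameterizedSparqlString pss = new ParameterizedSparqlString();
        pss.setCommandText("SELECT * WHERE { ?sbj a ?type }");
        // Values bound this way are escaped by Jena rather than
        // concatenated into the query string by hand.
        pss.setIri("type", "http://example.org/Person");
        System.out.println(pss.toString());
    }
}
```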


Re: reading a JSON string from sparql results

2018-03-26 Thread Martynas Jusevičius
Take a look at ResultSetFactory:
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/query/ResultSetFactory.html

On Mon, Mar 26, 2018 at 6:48 PM, Élie Roux 
wrote:

> Dear All,
>
> I'm trying to transform a String in the format of a SPARQL Select JSON
> Result into a org.apache.jena.query.ResultSet, but it proves much more
> difficult than I anticipated. What I'm trying to achieve is really
> starting with a json string, I can't use a QueryExecution.execSelect().
>
> I initially tried:
>
> ResultSet res = ResultSetMgr.read(myjsonstring, Lang.JSONLD);
>
> but the second argument is obviously wrong and there doesn't seem to be
> a Lang for the SPARQL json result format.
>
> I then tried to understand how the ResultSet was constructed in
> QueryExecution.execSelect, but I have to admit I got lost very quickly
> as there are many classes I'm not familiar with (Plan for instance)...
>
> Is there something obvious (or non-obvious) that I missed? Is it
> possible to achieve what I want?
>
> Thank you,
> --
> Elie
>
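A sketch of the String-to-ResultSet step (assuming the JSON really is a SELECT result; for an ASK result `fromJSON` throws, as discussed in an earlier thread, and `JSONInput.make` returning a `SPARQLResult` is the way to handle both forms):

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFactory;
import org.apache.jena.query.ResultSetFormatter;

public class JsonToResultSet {
    public static void main(String[] args) {
        String json =
            "{ \"head\": { \"vars\": [\"s\"] },"
          + "  \"results\": { \"bindings\": ["
          + "    { \"s\": { \"type\": \"uri\", \"value\": \"http://example.org/a\" } }"
          + "  ] } }";
        // Wrap the String in an InputStream, which is what fromJSON expects.
        InputStream in = new ByteArrayInputStream(json.getBytes(StandardCharsets.UTF_8));
        ResultSet rs = ResultSetFactory.fromJSON(in);
        ResultSetFormatter.out(System.out, rs);
    }
}
```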


Re: CONSTRUCT ... ORDER BY

2018-03-29 Thread Martynas Jusevičius
Yes, SELECT.

Or you can use rdf:List to store an ordered list.
On Thu, 29 Mar 2018 at 09.02, Laura Morales  wrote:

> OK, makes sense.
> Is there no way however to return a sorted list of triples? Only SELECT
> can return sorted results?
>
>
>
>
> Sent: Thursday, March 29, 2018 at 8:47 AM
> From: "Lorenz Buehmann" 
> To: users@jena.apache.org
> Subject: Re: CONSTRUCT ... ORDER BY
> Nothing - by specification, CONSTRUCT returns an RDF graph which in fact
> is a *set* of triples. Set -> unordered
>
>
> On 29.03.2018 07:47, Laura Morales wrote:
> > I'm running this query
> >
> >
> > CONSTRUCT { ?sbj ex:property ?obj }
> > FROM <...>
> > WHERE {
> > ?sbj a [] ;
> > ex:property ?obj .
> > }
> > ORDER BY ?obj
> >
> >
> > but the results are not returned sorted. Actually, it looks like the
> results are sorted by ?sbj instead. What am I doing wrong?
>
>


Clearing and reloading ontology and its imports

2018-04-06 Thread Martynas Jusevičius
Hi,

we have an ontology editor that stores ontologies and their terms in a
triplestore. On the other end, they are being loaded by OntDocumentManager.

The question is: what is the proper way to clear and reload an ontology and
its imports in OntDocumentManager?

Because, for example, a user could totally change owl:imports statements of
an ontology, and we need this feature to be able to reload a completely new
OntModel without any old imported submodels.

Right now I'm simply doing the following:


  OntDocumentManager.getInstance().getFileManager().removeCacheModel(ontologyURI);
  OntDocumentManager.getInstance().addModel(ontologyURI, infModel);
  OntModel ontModel = OntDocumentManager.getInstance().getOntology(ontologyURI, ontModelSpec);

But this is not enough, because doing ontModel.listImportedOntologyURIs()
lists all the old imports from before.

Am I getting this right and what needs to be done to reload the imports as
well?

I can see there is a concept of dynamic imports in OntModel, but not sure
it's related.


Martynas


Re: Clearing and reloading ontology and its imports

2018-04-06 Thread Martynas Jusevičius
I guess dynamic imports is what I need. But will they get triggered if I
remove and then re-add the whole Model (as shown in my example), rather
than statements?

I could probably removeAll() statements from it instead of removing it from
cache.

On Fri, Apr 6, 2018 at 1:59 PM, Martynas Jusevičius 
wrote:

> Hi,
>
> we have an ontology editor that stores ontologies and their terms in a
> triplestore. On the other end, they are being loaded by OntDocumentManager.
>
> The question is: what is the proper way to clear and reload an ontology
> and its imports in OntDocumentManager?
>
> Because, for example, a user could totally change owl:imports statements
> of an ontology, and we need this feature to be able to reload a completely
> new OntModel without any old imported submodels.
>
> Right now I'm simply doing the following:
>
>   OntDocumentManager.getInstance().getFileManager()
> .removeCacheModel(ontologyURI)
>   OntDocumentManager.getInstance().addModel(ontologyURI, infModel);
>   OntModel ontModel = 
> OntDocumentManager.getInstance().getOntology(ontologyURI,
> ontModelSpec);
>
> But this is not enough, because doing ontModel.listImportedOntologyURIs()
> lists all the old imports from before.
>
> Am I getting this right and what needs to be done to reload the imports as
> well?
>
> I can see there is a concept of dynamic imports in OntModel, but not sure
> it's related.
>
>
> Martynas
>


Re: Clearing and reloading ontology and its imports

2018-04-07 Thread Martynas Jusevičius
Sorry for the noise, it looks like the listImportedOntologyURIs() was a red
herring -- the submodel count actually comes from InfModel which includes
the schema submodels.


But I was able to dig a little further, and observed after the code I have
shown is executed, there are inconsistencies in models for the same
ontology but accessed in different ways:

  ontology.getOntModel().getOntology("https://localhost:4443/demo/iswc-2017/ns/templates#").listImports().toList().toString()

  [http://atomgraph.com/ns/platform/templates#, https://www.w3.org/ns/ldt/core/templates#]


  ontology.getOntModel().getDocumentManager().getOntology("https://localhost:4443/demo/iswc-2017/ns/templates#", org.apache.jena.ontology.OntModelSpec.OWL_MEM).getOntology("https://localhost:4443/demo/iswc-2017/ns/templates#").listImports().toList().toString()

  [https://www.w3.org/ns/ldt/core/templates#]


Notice how imports for
https://localhost:4443/demo/iswc-2017/ns/templates# differ.
How is that possible, and how do I avoid this?

On Fri, Apr 6, 2018 at 3:19 PM, Martynas Jusevičius 
wrote:

> I guess dynamic imports is what I need. But will they get triggered if I
> remove and then re-add the whole Model (as shown in my example), rather
> than statements?
>
> I could probably removeAll() statements from it instead of removing it
> from cache.
>
> On Fri, Apr 6, 2018 at 1:59 PM, Martynas Jusevičius <
> marty...@atomgraph.com> wrote:
>
>> Hi,
>>
>> we have an ontology editor that stores ontologies and their terms in a
>> triplestore. On the other end, they are being loaded by OntDocumentManager.
>>
>> The question is: what is the proper way to clear and reload an ontology
>> and its imports in OntDocumentManager?
>>
>> Because, for example, a user could totally change owl:imports statements
>> of an ontology, and we need this feature to be able to reload a completely
>> new OntModel without any old imported submodels.
>>
>> Right now I'm simply doing the following:
>>
>>   OntDocumentManager.getInstance().getFileManager().
>> removeCacheModel(ontologyURI)
>>   OntDocumentManager.getInstance().addModel(ontologyURI, infModel);
>>   OntModel ontModel = 
>> OntDocumentManager.getInstance().getOntology(ontologyURI,
>> ontModelSpec);
>>
>> But this is not enough, because doing ontModel.listImportedOntologyURIs()
>> lists all the old imports from before.
>>
>> Am I getting this right and what needs to be done to reload the imports
>> as well?
>>
>> I can see there is a concept of dynamic imports in OntModel, but not sure
>> it's related.
>>
>>
>> Martynas
>>
>
>
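One way to side-step stale cached imports entirely (a sketch; it trades the shared `OntDocumentManager` singleton for a throwaway manager per reload, which may or may not fit your deployment) is to build each reload against a fresh manager whose cache is guaranteed empty:

```java
import org.apache.jena.ontology.OntDocumentManager;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.ontology.OntModelSpec;
import org.apache.jena.rdf.model.Model;

public class OntologyReload {

    // Rebuild the OntModel for ontologyURI from freshData, ignoring any
    // previously cached imports held by the global document manager.
    public static OntModel reload(String ontologyURI, Model freshData) {
        OntDocumentManager dm = new OntDocumentManager();
        dm.addModel(ontologyURI, freshData);
        // Copy the spec so the shared OWL_MEM constant is not mutated.
        OntModelSpec spec = new OntModelSpec(OntModelSpec.OWL_MEM);
        spec.setDocumentManager(dm);
        return dm.getOntology(ontologyURI, spec);
    }
}
```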


Re: linked data and URLs

2018-05-22 Thread Martynas Jusevičius
Why generate URIs at all in the beginning, can't you use blank nodes?

Rewriting URIs is generally a bad idea in a Linked Data setting. Make one
datasource canonical and let the other one deal with that. Or maybe you can
configure your proxy in a way that hides the port number and you don't need
the second version (just a guess).

Also, you generate document URIs, and persons are not documents. Hash URIs
would probably be best for persons.

If you choose to ignore the above, this method might help you:
https://jena.apache.org/documentation/javadoc/jena/org/apache/jena/util/ResourceUtils.html#renameResource-org.apache.jena.rdf.model.Resource-java.lang.String-
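A minimal sketch of that method with made-up URIs: renameResource copies every statement mentioning the old resource (as subject or object) onto the new URI within the same model.

```java
// Sketch: rewriting a backend URI to a frontend-facing one.
// Both URIs below are hypothetical.
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.util.ResourceUtils;
import org.apache.jena.vocabulary.RDF;

public class RenameSketch {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Resource backend = m.createResource("urn:example:person:1234"); // hypothetical
        backend.addProperty(RDF.type,
                m.createResource("http://xmlns.com/foaf/0.1/Person"));

        // after this call, the old URI no longer appears in the model
        ResourceUtils.renameResource(backend, "http://frontend:8080/data/1234");
        m.write(System.out, "TURTLE");
    }
}
```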

On Tue, May 22, 2018 at 1:00 PM, Claude Warren  wrote:

> I have what I think may  be a common problem and am looking for suggested
> patterns and anti-patterns for a solution.
>
> For the sake of this example let's assume that the system described creates
> FOAF records.
>
> == PROBLEM
>
> Backend:
>
> The backend system generates FOAF records but does not have any information
> about where they will be stored/deployed.  So it generates records like
>
>  a foaf:Person ;
> foaf:name "Jimmy Wales" ;
> foaf:mbox  ;
> foaf:homepage  ;
> foaf:nick "Jimbo" ;
> foaf:depiction  ;
> foaf:interest  ;
> foaf:knows  .
>
>  a foaf:Person ;
> foaf:name "Angela Beesley" .
>
>
> This data is stored in a Fuseki based server.
>
> Frontend 1:
>
> The front end should replace the  based URIs with http
> based URIs that point to the frontend.  So assuming the frontend is at
> http://frontend:8080 and has a method to return RDF in turtle format the
> RDF should look like
>
>   a foaf:Person ;
>  foaf:name "Jimmy Wales" ;
>  foaf:mbox  ;
>  foaf:homepage  ;
>  foaf:nick "Jimbo" ;
>  foaf:depiction  ;
>  foaf:interest  ;
>  foaf:knows  .
>
>   a foaf:Person ;
>  foaf:name "Angela Beesley" .
>
> Frontend 2:
>
> There is a second frontend with a different URL. http://frontend2:8080,
> frontend 1 and frontend 2.  Frontend 2 does not have access to frontend 1
> (assume that there is a firewall that prohibits the access).   Frontend2
> should produce RDF like:
>
>   a foaf:Person ;
>  foaf:name "Jimmy Wales" ;
>  foaf:mbox  ;
>  foaf:homepage  ;
>  foaf:nick "Jimbo" ;
>  foaf:depiction  ;
>  foaf:interest  ;
>  foaf:knows  .
>
>   a foaf:Person ;
>  foaf:name "Angela Beesley" .
>
> == Question
>
> How can I setup a system that will automatically convert one URI to another
> without storing multiple copies of the data (e.g. not multiple datasets).
> I have thought about using owl:sameAs and am wondering if there is a
> reasoner that will process it.
>
> Anyway, has anyone else come across this problem (I figure so) and does
> anyone have a possible solution?
>
>
>
>
> Thx,
> Claude
> --
> I like: Like Like - The likeliest place on the web
> 
> LinkedIn: http://www.linkedin.com/in/claudewarren
>


Re: Update Query Parsing error

2018-05-22 Thread Martynas Jusevičius
I think you should open a JIRA ticket.
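For anyone hitting this later: a NullPointerException in ARQ.isTrue during parsing is the usual symptom of Jena's subsystem initialization not having run (plausible here, since the stack trace shows Stardog example code driving Jena). A hedged sketch of the common workaround, calling JenaSystem.init() before parsing; whether it fixes this particular setup is an assumption.

```java
// Sketch: force Jena initialization before any parsing; the update
// string uses placeholder example.org URIs.
import org.apache.jena.sys.JenaSystem;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateRequest;

public class UpdateParseSketch {
    public static void main(String[] args) {
        JenaSystem.init(); // ensure ARQ constants are initialized

        UpdateRequest req = UpdateFactory.create(
            "DELETE { GRAPH ?g { ?s <http://example.org/p> <http://example.org/o> } } " +
            "WHERE  { GRAPH ?g { ?s a <http://example.org/Type> } }");
        System.out.println(req);
    }
}
```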

On Tue, May 22, 2018 at 4:53 PM, Bart van Leeuwen <
bart_van_leeu...@netage.nl> wrote:

> Hi,
>
> Although you are correct this should be ?g in both cases, it does not fix
> the issue.
>
> Met Vriendelijke Groet / With Kind Regards
> Bart van Leeuwen
>
>
> twitter: @semanticfire
> tel. +31(0)6-53182997
> Netage B.V.
> http://netage.nl
> Esdoornstraat 3
> 3461ER Linschoten
> The Netherlands
>
>
>
>
> From:Paul Hermans 
> To:"users@jena.apache.org" 
> Date:22-05-2018 15:38
> Subject:Re: Update Query Parsing error
> --
>
>
>
> Bart,
>
> Graph ?x <-> graph ?g ?
>
>
> Paul
>
>
> On 22 May 2018 at 15:24:33, Bart van Leeuwen (bart_van_leeu...@netage.nl<
> mailto:bart_van_leeu...@netage.nl >) wrote:
>
> Hi,
>
> On Jena 3.7.0
>
> The following the query ( which is accepted by Stardog on the console )
>
> DELETE { Graph ?g { ?resultset  insight#organization>   BrandweerAmsterdamAmstelland>}}
> WHERE { Graph ?x {
>?resultset a  .
>?resultset   "
> 00b36dabd8f846f6b53375abf5fe8ad9"
> }}
>
> causes the following error:
>
> Exception in thread "main" org.apache.jena.query.QueryException
>    at org.apache.jena.sparql.lang.ParserSPARQL11Update._parse(ParserSPARQL11Update.java:84)
>    at org.apache.jena.sparql.lang.ParserSPARQL11Update.parse$(ParserSPARQL11Update.java:40)
>    at org.apache.jena.sparql.lang.UpdateParser.parse(UpdateParser.java:39)
>    at org.apache.jena.update.UpdateFactory.make(UpdateFactory.java:87)
>    at org.apache.jena.update.UpdateFactory.create(UpdateFactory.java:78)
>    at org.apache.jena.update.UpdateFactory.create(UpdateFactory.java:56)
>    at org.apache.jena.update.UpdateFactory.create(UpdateFactory.java:46)
>    at com.complexible.stardog.examples.jena.App.main(App.java:90)
> Caused by: java.lang.NullPointerException
>    at org.apache.jena.query.ARQ.isTrue(ARQ.java:650)
>    at org.apache.jena.sparql.lang.ParserBase.<init>(ParserBase.java:292)
>    at org.apache.jena.sparql.lang.SPARQLParserBase.<init>(SPARQLParserBase.java:43)
>    at org.apache.jena.sparql.lang.sparql_11.SPARQLParser11Base.<init>(SPARQLParser11Base.java:22)
>    at org.apache.jena.sparql.lang.sparql_11.SPARQLParser11.<init>(SPARQLParser11.java:4974)
>    at org.apache.jena.sparql.lang.ParserSPARQL11Update._parse(ParserSPARQL11Update.java:57)
>    ... 7 more
>
> Met Vriendelijke Groet / With Kind Regards
> Bart van Leeuwen
>
> twitter: @semanticfire
> tel. +31(0)6-53182997
> Netage B.V.
> http://netage.nl
> Esdoornstraat 3
> 3461ER Linschoten
> The Netherlands
> Kind Regards,
> Paul Hermans
> -
> ProXML bvba
> Linked Data services
> KBO: http://data.kbodata.be/organisation/0476_068_080#id
> (w) www.proxml.be
> (e) p...@proxml.be
> (tw) @PaulZH
> (t) +32 15 23 00 76
> (m) +32 473 66 03 20
> Narcisweg 17
> 
> 3140 Keerbergen
> Belgium
>
> ODEdu – Innovative Open Data Education and Training based on PBL and
> Learning Analytics - http://odedu-project.eu/
> OpenGovIntelligence – Public Administration Modernization by exploiting
> Linked Open Statistical Data - http://www.opengovintelligence.eu www.opengovintelligence.eu/>
> OpenCube – Linked Open Statistical Data - http://opencube-project.eu/
>
>
>


Re: Fuseki user-defined Web Services

2018-05-24 Thread Martynas Jusevičius
I had long ago suggested that Jena should build on JAX-RS, which is the
RESTful API for Java.

You can see how that can be done here:
https://github.com/AtomGraph/Core/blob/master/src/main/java/com/atomgraph/core/model/impl/QueriedResourceBase.java

On Thu, May 24, 2018 at 4:19 PM, Piotr Nowara  wrote:

> Hi,
>
> is there any documentation describing the new Fuseki capability of handling
> the user-defined services?
>
> The 3.7.0 release info says: "JENA-1435: Provide extensibility of Fuseki
> with new services. It is now possible to add custom services to a Fuseki
> service, not just the services provided by the Fuseki distribution."
>
> Does this mean I can create my own REST Web Service and host it using
> Fuseki?
>
> Thanks,
> Piotr
>


Re: Fuseki user-defined Web Services

2018-05-24 Thread Martynas Jusevičius
No, just this:
https://www.mail-archive.com/users@jena.apache.org/msg08805.html

On Thu, May 24, 2018 at 5:05 PM, Adam Soroka  wrote:

> Was there a PR associated with that suggestion?
>
> Adam
>
> On 2018/05/24 14:29:51, Martynas Jusevičius 
> wrote:
> > I had long ago suggested that Jena should build on JAX-RS, which is the
> > RESTful API for Java.
> >
> > You can see how that can be done here:
> > https://github.com/AtomGraph/Core/blob/master/src/main/
> java/com/atomgraph/core/model/impl/QueriedResourceBase.java
> >
> > On Thu, May 24, 2018 at 4:19 PM, Piotr Nowara 
> wrote:
> >
> > > Hi,
> > >
> > > is there any documentation describing the new Fuseki capability of
> handling
> > > the user-defined services?
> > >
> > > The 3.7.0 release info says: "JENA-1435: Provide extensibility of
> Fuseki
> > > with new services. It is now possible to add custom services to a
> Fuseki
> > > service, not just the services provided by the Fuseki distribution."
> > >
> > > Does this mean I can create my own REST Web Service and host it using
> > > Fuseki?
> > >
> > > Thanks,
> > > Piotr
> > >
> >
>


Re: Fuseki user-defined Web Services

2018-05-24 Thread Martynas Jusevičius
We have implemented SPARQL protocol and GSP (maybe not 100%, but enough for
our use) over JAX-RS:
https://github.com/AtomGraph/Core/tree/master/src/main/java/com/atomgraph/core/model


On Thu, May 24, 2018 at 5:32 PM, Andy Seaborne  wrote:

> A somewhat different form of customization.
>
> JENA-1435 means it is possible to have "/dataset/MyService" and provide
> the code for MyService without having to modify Fuseki source code.
>
> A JAX-RS module could use that to plug in.
>
> It could not be used for the main Fuseki dispatch without significant
> replumbing of JAX-RS as Fuseki dispatch changes as datasets are added and
> deleted. Fuseki needs to respect the SPARQL protocols 9query, update, GSP).
>
> Andy
>
>
> On 24/05/18 16:12, Martynas Jusevičius wrote:
>
>> No, just this:
>> https://www.mail-archive.com/users@jena.apache.org/msg08805.html
>>
>> On Thu, May 24, 2018 at 5:05 PM, Adam Soroka  wrote:
>>
>> Was there a PR associated with that suggestion?
>>>
>>> Adam
>>>
>>> On 2018/05/24 14:29:51, Martynas Jusevičius 
>>> wrote:
>>>
>>>> I had long ago suggested that Jena should build on JAX-RS, which is the
>>>> RESTful API for Java.
>>>>
>>>> You can see how that can be done here:
>>>> https://github.com/AtomGraph/Core/blob/master/src/main/
>>>>
>>> java/com/atomgraph/core/model/impl/QueriedResourceBase.java
>>>
>>>>
>>>> On Thu, May 24, 2018 at 4:19 PM, Piotr Nowara 
>>>>
>>> wrote:
>>>
>>>>
>>>> Hi,
>>>>>
>>>>> is there any documentation describing the new Fuseki capability of
>>>>>
>>>> handling
>>>
>>>> the user-defined services?
>>>>>
>>>>> The 3.7.0 release info says: "JENA-1435: Provide extensibility of
>>>>>
>>>> Fuseki
>>>
>>>> with new services. It is now possible to add custom services to a
>>>>>
>>>> Fuseki
>>>
>>>> service, not just the services provided by the Fuseki distribution."
>>>>>
>>>>> Does this mean I can create my own REST Web Service and host it using
>>>>> Fuseki?
>>>>>
>>>>> Thanks,
>>>>> Piotr
>>>>>
>>>>>
>>>>
>>>
>>


Re: Jena writing TURTLE instead of TRIG

2018-06-03 Thread Martynas Jusevičius
TriG without named graphs is Turtle, AFAIK.

And you can't have named graphs in Model since it only contains triples,
not quads.

If you want named graphs (but you're not using them right now), you should
look into Dataset:
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/query/Dataset.html

On Sun, Jun 3, 2018 at 1:43 PM, agate.m...@gmail.com 
wrote:

> Hi,
>
> There is obviously an issue with the Model.write method for TriG format.
>
> Run the following to see that ttl is returned when Trig is
> requested...Formats names are based on https://jena.apache.org/
> documentation/io/
>
> package io.bdrc.ldspdi.test;
>
> import org.apache.jena.rdf.model.Model;
> import org.apache.jena.rdf.model.ModelFactory;
>
> public class WriteTrigTest {
>
> public static void main(String[] args) {
> Model mod=ModelFactory.createDefaultModel();
> mod.read("http://purl.bdrc.io/resource/T00AG0281.ttl";);
> System.out.println("* TURTLE
> **");
> mod.write(System.out,"TURTLE");
> System.out.println("* JSON
>  **");
> mod.write(System.out,"RDF/JSON");
> System.out.println("* TRIG
>  **");
> mod.write(System.out,"TriG");
> }
>
> }
>
> Marc
>


Re: Jena writing TURTLE instead of TRIG

2018-06-03 Thread Martynas Jusevičius
Try reading RDF 1.1 Primer, it should help:
https://www.w3.org/TR/rdf11-primer/#section-trig

BTW use Factory classes as Andy mentioned; do not instantiate *Impl classes
yourself.
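A sketch of both points with made-up data: DatasetFactory instead of DatasetImpl, and the model's prefix map copied onto the default graph so the TriG output keeps it.

```java
// Sketch: write a model as a named graph in TriG, carrying the prefixes over.
import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class TrigWriteSketch {
    public static void main(String[] args) {
        Model mod = ModelFactory.createDefaultModel();
        mod.setNsPrefix("foaf", "http://xmlns.com/foaf/0.1/");
        mod.createResource("http://example.org/#me")
           .addProperty(mod.createProperty("http://xmlns.com/foaf/0.1/", "name"), "Alice");

        Dataset ds = DatasetFactory.create();
        ds.addNamedModel("http://example.org/graph", mod);
        ds.getDefaultModel().setNsPrefixes(mod); // keep the prefixes in the output

        RDFDataMgr.write(System.out, ds, Lang.TRIG);
    }
}
```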

On Sun, Jun 3, 2018 at 2:45 PM, agate.m...@gmail.com 
wrote:

>
>
> On 2018/06/03 12:29:25, Andy Seaborne  wrote:
> >
> >
> > On 03/06/18 13:21, agate.m...@gmail.com wrote:
> > >
> > >
> > > On 2018/06/03 11:46:26, Martynas Jusevičius 
> wrote:
> > >> TriG without named graphs is Turtle, AFAIK.
> > >>
> > >> And you can't have named graphs in Model since it only contains
> triples,
> > >> not quads.
> > >>
> > >> If you want named graphs (but you're not using them right now), you
> should
> > >> look into Dataset:
> > >> https://jena.apache.org/documentation/javadoc/arq/org/
> apache/jena/query/Dataset.html
> > >>
> > >> On Sun, Jun 3, 2018 at 1:43 PM, agate.m...@gmail.com <
> agate.m...@gmail.com>
> > >> wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> There is obviously an issue with the Model.write method for TriG
> format.
> > >>>
> > >>> Run the following to see that ttl is returned when Trig is
> > >>> requested...Formats names are based on https://jena.apache.org/
> > >>> documentation/io/
> > >>>
> > >>> package io.bdrc.ldspdi.test;
> > >>>
> > >>> import org.apache.jena.rdf.model.Model;
> > >>> import org.apache.jena.rdf.model.ModelFactory;
> > >>>
> > >>> public class WriteTrigTest {
> > >>>
> > >>>  public static void main(String[] args) {
> > >>>  Model mod=ModelFactory.createDefaultModel();
> > >>>  mod.read("http://purl.bdrc.io/resource/T00AG0281.ttl";);
> > >>>  System.out.println("* TURTLE
> > >>> **");
> > >>>  mod.write(System.out,"TURTLE");
> > >>>  System.out.println("* JSON
> > >>>   **");
> > >>>  mod.write(System.out,"RDF/JSON");
> > >>>  System.out.println("* TRIG
> > >>>   **");
> > >>>  mod.write(System.out,"TriG");
> > >>>  }
> > >>>
> > >>> }
> > >>>
> > >>> Marc
> > >>>
> > >>
> > > Thanks ! It is not clear in the docs that printing Trig format
> requires a Named graph.
> >
> > It doesn't.
> >
> > There is no requirement in TriG to write the default graph inside {}.
> >
> > So writing a model as TriG is the same as writing it as Turtle.  The
> > formats overlap (by design of the specs). A model on its own is treated
> > as a dataset with a default graph.
> >
> > 
> > Writing TriG:
> The prefixes are the prefixes of the default graph.
> >
>  > The following code does the job (but I am losing the Prefix Map in
> > the process... any clue ?):
> > >
> > > public class WriteTrigTest {
> > >
> > >  public static void main(String[] args) {
> > >  Model mod=ModelFactory.createDefaultModel();
> > >  mod.read("http://purl.bdrc.io/resource/T00AG0281.ttl";);
> > >  DatasetImpl impl=new DatasetImpl(ModelFactory.
> createDefaultModel());
> >
> > Dataset ds = DatasetFactory.create
> >
> > >  impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";,
> mod);
> > >  RDFDataMgr.write(System.out, impl.asDatasetGraph(),
> Lang.TRIG) ;
> >
> > ds.getDefaultModel().setNsPrefixes(mod);
> >
> > >  }
> > >
> > > }
> > >
> >
> Ok, I got that. Then why bother with two formats (Trig and Turtle) if they
> are the same ? Is it the case that all valid Turtle are also valid Trig and
> vice versa ? It's not clear and it looks like the difference between Trig
> and Turtle is related to "named graph" (i.e Trig is better suited to
> serialize a named graph):
>
> this works:
>
> DatasetImpl impl=new DatasetImpl(ModelFactory.createDefaultModel());
> impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";, mod);
> RDFDataMgr.write(System.out, impl.asDatasetGraph(), Lang.TRIG) ;
>
> while this fails:
>
> DatasetImpl impl=new DatasetImpl(ModelFactory.createDefaultModel());
> impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";, mod);
> RDFDataMgr.write(System.out, impl.asDatasetGraph(), Lang.TURTLE) ;
>
> Marc
>


Re: Jena writing TURTLE instead of TRIG

2018-06-03 Thread Martynas Jusevičius
*do not

On Sun, Jun 3, 2018 at 2:48 PM, Martynas Jusevičius 
wrote:

> Try reading RDF 1.1 Primer, it should help: https://www.w3.org/TR/
> rdf11-primer/#section-trig
>
> BTW use Factory classes as Andy mentioned; do not instantiate *Impl
> classes yourself.
>
> On Sun, Jun 3, 2018 at 2:45 PM, agate.m...@gmail.com  > wrote:
>
>>
>>
>> On 2018/06/03 12:29:25, Andy Seaborne  wrote:
>> >
>> >
>> > On 03/06/18 13:21, agate.m...@gmail.com wrote:
>> > >
>> > >
>> > > On 2018/06/03 11:46:26, Martynas Jusevičius 
>> wrote:
>> > >> TriG without named graphs is Turtle, AFAIK.
>> > >>
>> > >> And you can't have named graphs in Model since it only contains
>> triples,
>> > >> not quads.
>> > >>
>> > >> If you want named graphs (but you're not using them right now), you
>> should
>> > >> look into Dataset:
>> > >> https://jena.apache.org/documentation/javadoc/arq/org/apache
>> /jena/query/Dataset.html
>> > >>
>> > >> On Sun, Jun 3, 2018 at 1:43 PM, agate.m...@gmail.com <
>> agate.m...@gmail.com>
>> > >> wrote:
>> > >>
>> > >>> Hi,
>> > >>>
>> > >>> There is obviously an issue with the Model.write method for TriG
>> format.
>> > >>>
>> > >>> Run the following to see that ttl is returned when Trig is
>> > >>> requested...Formats names are based on https://jena.apache.org/
>> > >>> documentation/io/
>> > >>>
>> > >>> package io.bdrc.ldspdi.test;
>> > >>>
>> > >>> import org.apache.jena.rdf.model.Model;
>> > >>> import org.apache.jena.rdf.model.ModelFactory;
>> > >>>
>> > >>> public class WriteTrigTest {
>> > >>>
>> > >>>  public static void main(String[] args) {
>> > >>>  Model mod=ModelFactory.createDefaultModel();
>> > >>>  mod.read("http://purl.bdrc.io/resource/T00AG0281.ttl";);
>> > >>>  System.out.println("* TURTLE
>> > >>> **");
>> > >>>  mod.write(System.out,"TURTLE");
>> > >>>  System.out.println("* JSON
>> > >>>   **");
>> > >>>  mod.write(System.out,"RDF/JSON");
>> > >>>  System.out.println("* TRIG
>> > >>>   **");
>> > >>>  mod.write(System.out,"TriG");
>> > >>>  }
>> > >>>
>> > >>> }
>> > >>>
>> > >>> Marc
>> > >>>
>> > >>
>> > > Thanks ! It is not clear in the docs that printing Trig format
>> requires a Named graph.
>> >
>> > It doesn't.
>> >
>> > There is no requirement in TriG to write the default graph inside {}.
>> >
>> > So writing a model as TriG is the same as writing it as Turtle.  The
>> > formats overlap (by design of the specs). A model on its own is treated
>> > as a dataset with a default graph.
>> >
>> > 
>> > Writing TriG:
>> > The prefixes are the prefixes of the default graph.
>> >
>> >  > The following code does the job (but I am losing the Prefix Map in
>> > the process... any clue ?):
>> > >
>> > > public class WriteTrigTest {
>> > >
>> > >  public static void main(String[] args) {
>> > >  Model mod=ModelFactory.createDefaultModel();
>> > >  mod.read("http://purl.bdrc.io/resource/T00AG0281.ttl";);
>> > >  DatasetImpl impl=new DatasetImpl(ModelFactory.creat
>> eDefaultModel());
>> >
>> > Dataset ds = DatasetFactory.create
>> >
>> > >  impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";,
>> mod);
>> > >  RDFDataMgr.write(System.out, impl.asDatasetGraph(),
>> Lang.TRIG) ;
>> >
>> > ds.getDefaultModel().setNsPrefixes(mod);
>> >
>> > >  }
>> > >
>> > > }
>> > >
>> >
>> Ok, I got that. Then why bother with two formats (Trig and Turtle) if
>> they are the same ? Is it the case that all valid Turtle are also valid
>> Trig and vice versa ? It's not clear and it looks like the difference
>> between Trig and Turtle is related to "named graph" (i.e Trig is better
>> suited to serialize a named graph):
>>
>> this works:
>>
>> DatasetImpl impl=new DatasetImpl(ModelFactory.createDefaultModel());
>> impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";,
>> mod);
>> RDFDataMgr.write(System.out, impl.asDatasetGraph(), Lang.TRIG) ;
>>
>> while this fails:
>>
>> DatasetImpl impl=new DatasetImpl(ModelFactory.createDefaultModel());
>> impl.addNamedModel("http://purl.bdrc.io/resource/T00AG0281";,
>> mod);
>> RDFDataMgr.write(System.out, impl.asDatasetGraph(), Lang.TURTLE) ;
>>
>> Marc
>>
>
>


Re: Customizing RDF/XML writer for quads

2018-06-13 Thread Martynas Jusevičius
TriX is of course better than nothing, but it's not really an adequate
replacement: it is triple/quad-based, not resource-based (with graph support
currently missing) like RDF/XML.

There is a RAX Community Group at W3C that aimed to address the RDF and XML
intersection, but it hasn't produced much beyond the following document. My
arguments on why RDF/XML with graph support would be useful:
https://www.w3.org/community/rax/wiki/Draft_Material#XML_formats_for_RDF_datasets_.28quads.29

On Wed, Jun 13, 2018 at 9:41 PM, ajs6f  wrote:

> Jena can currently produce TriX:
>
> https://jena.apache.org/documentation/io/rdf-output.html#rdfformat
> http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf
>
> which is not a W3C spec, but does indeed encode named graphs. Does that
> meet your needs?
>
> ajs6f
>
> > On Jun 13, 2018, at 8:38 AM, Alexandra Kokkinaki <
> alexandra.kokkin...@gmail.com> wrote:
> >
> > Dear all,
> >
> > I also want to use named graphs to capture provenance information on
> > triples but in RDF/XML serialization.
> > Iam replying Martynas email, as he first asked for that, wondering if
> > anything happened since then, and I haven't found it?
> >
> > Many thanks
> > Alexandra
> >
> > On Thu, Jun 9, 2016 at 4:58 PM, Martynas Jusevičius <
> marty...@atomgraph.com>
> > wrote:
> >
> >> I found the "abandoned" discussion on RDF 1.1 WG wiki:
> >> https://www.w3.org/2011/rdf-wg/wiki/TF-RDF-XML#Change_8:_
> >> named_graph_support_in_RDF-XML
> >>
> >> On Thu, Jun 9, 2016 at 5:26 PM, Martynas Jusevičius
> >>  wrote:
> >>> It seems that someone has thought about this before:
> >>> https://www.w3.org/Submission/rdfsource/
> >>>
> >>> TriX is just not a natural structure for XSLT transformations.
> >>>
> >>> On Thu, Jun 9, 2016 at 4:34 PM, Andy Seaborne  wrote:
> >>>>
> >>>>
> >>>> On 09/06/16 14:47, Martynas Jusevičius wrote:
> >>>>>
> >>>>> Good points. Yes TriG-like structure makes more sense -- but then it
> >>>>> is clearly non-standard.
> >>>>
> >>>>
> >>>> That's a good thing - no risk of wrong data or missing data.
> >>>>
> >>>> Using an attribute, rdfx:graph - won't it be a property if it is not
> >>>> understood as additional syntax attribute?
> >>>>
> >>>>>
> >>>>> Isn't this a gap in RDF standardization -- an XML format for quads?
> >>>>
> >>>>
> >>>> IIRC When it came down to it, no one was interested in spending time
> on
> >> it.
> >>>>
> >>>> There are (probably) some notes in the RDF 1.1 WG wiki.
> >>>>
> >>>> TriX is a de facto standard.
> >>>>
> >>>>Andy
> >>>>
> >>>>
> >>>>>
> >>>>> On Thu, Jun 9, 2016 at 1:51 PM, Andy Seaborne 
> wrote:
> >>>>>>
> >>>>>> On 08/06/16 15:22, Martynas Jusevičius wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Hey,
> >>>>>>>
> >>>>>>> would it be possible to adopt RDF/XML writer for quads (Dataset)?
> >> What
> >>>>>>> would that take?
> >>>>>>>
> >>>>>>> I know it would involve a non-standard syntax, but if we used
> >>>>>>> namespaced attributes, XML-compatible tools shouldn't break.
> >>>>>>>
> >>>>>>> I am thinking it should be possible to add an attribute (e.g.
> >>>>>>> rdfx:graph) with graph name on each of the property elements,
> >>>>>>> something like this:
> >>>>>>>
> >>>>>>>  >>>>>>> rdf:about="https://www.w3.org/People/Berners-Lee/card#i";>
> >>>>>>> >>>>>>>
> >>>>>>>
> >>>>>>> rdfx:graph="https://www.w3.org/People/Berners-Lee/card";>
> >> Tim
> >>>>>>> >>>>>>>
> >>>>>>>
> >>>>>>> rdfx:graph=http://data.semanticweb.org/person/tim-berners-lee/rdf
> >> "">Berners-Lee
> >>>>>>> 
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> And if a triple is in 2 graphs? The default graph?
> >>>>>>
> >>>>>>>
> >>>>>>> What do you think? Would someone else be interested in such
> >>>>>>> serialization? I know there is TriX, but it is not convenient for
> >> XSLT
> >>>>>>> transformation.
> >>>>>>>
> >>>>>>> Martynas
> >>>>>>> atomgraph.com
> >>>>>>>
> >>>>>>
> >>>>>> An alternative is more TriG like :
> >>>>>>
> >>>>>>
> >>>>>> 
> >>>>>>    RDF/XML here ...
> >>>>>>  
> >>>>
> >>>>
> >>
>
>


Re: Retrieving SPARQL-results xml using curl

2018-06-28 Thread Martynas Jusevičius
You're doing a POST request but supplying a URL query as if it was GET?

https://www.w3.org/TR/sparql11-protocol/#query-operation

On Thu, Jun 28, 2018 at 12:24 PM, Brice Sommacal 
wrote:

> Hello,
>
> I use the last distribution of Fuseki which is initialized with a TDB2
> dataset [1].
> Working with the user interface, everything is working well. I mean I am
> able to retrieve results from a select query and then download it.
>
> Going one step further, I would like to automate dataset updates and
> query result retrieval.
> I am able to run a SPARQL Update query with curl (using SPARQL INSERT
> inside a ttl file).
>
> I am also able to get SPARQL Result JSON objects using commands line like
> the following :
> curl
> http://localhost:3030/Metro/query?query=PREFIX+rdf%3A+%
> 3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22[..
> .]
>
> However,  if I specify the header to return XML result set, it keeps
> outputting the data in a JSON format.
>
> Here is my command : curl
> http://localhost:3030/Metro/query?query=PREFIX+rdf%3A+%
> 3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22[..
> .]   -X POST  -H 'Accept:+application/sparql-results+xml'
>
> I don't know what could be wrong. I am not an expert using curl... But the
> syntax that I have seen on forums looks the same...
>
> Do you have an idea of what I am doing wrong?
>
>
> Regards,
>
>
> Brice
>
>
> [1] There is huge progress between TDB1 and TDB2 (also Fuseki 1 and 2).
> Thanks a lot to the community for making it available.
>


Creating AWriter for StreamRDF

2018-08-21 Thread Martynas Jusevičius
Hi,

I'm extending WriterStreamRDFPlain to implement a streaming CSV
parser. Its constructor [1] states:

Output tuples, using UTF8 output See
StreamRDFLib.writer(java.io.OutputStream) for ways to create a AWriter
object.

Then I'm looking at StreamRDFLib, but there are no methods that create
AWriter [2]. Only methods that create Stream RDF.

What am I missing?

[1] 
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/writer/WriterStreamRDFPlain.html#WriterStreamRDFPlain-org.apache.jena.atlas.io.AWriter-
[2] 
https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/system/StreamRDFLib.html

Martynas


Re: Creating AWriter for StreamRDF

2018-08-21 Thread Martynas Jusevičius
Or should I use WriterStreamRDFFlat instead of WriterStreamRDFPlain?
Can't really understand the difference, except that they extend
different classes...
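For what it's worth, the javadoc pointer to StreamRDFLib looks stale; an AWriter can be obtained from org.apache.jena.atlas.io.IO. This mirrors how Jena's own code constructs WriterStreamRDFPlain, but treat it as an assumption rather than documented API.

```java
// Sketch: wrap an OutputStream as an AWriter and stream one triple.
import org.apache.jena.atlas.io.AWriter;
import org.apache.jena.atlas.io.IO;
import org.apache.jena.graph.NodeFactory;
import org.apache.jena.graph.Triple;
import org.apache.jena.riot.writer.WriterStreamRDFPlain;

public class StreamWriterSketch {
    public static void main(String[] args) {
        AWriter out = IO.wrapUTF8(System.out); // assumed AWriter factory
        WriterStreamRDFPlain writer = new WriterStreamRDFPlain(out);
        writer.start();
        writer.triple(Triple.create(
            NodeFactory.createURI("http://example.org/s"),
            NodeFactory.createURI("http://example.org/p"),
            NodeFactory.createURI("http://example.org/o")));
        writer.finish();
        out.flush();
    }
}
```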

On Tue, Aug 21, 2018 at 2:11 PM, Martynas Jusevičius
 wrote:
> Hi,
>
> I'm extending WriterStreamRDFPlain to implement a streaming CSV
> parser. Its constructor [1] states:
>
> Output tuples, using UTF8 output See
> StreamRDFLib.writer(java.io.OutputStream) for ways to create a AWriter
> object.
>
> Then I'm looking at StreamRDFLib, but there are no methods that create
> AWriter [2]. Only methods that create Stream RDF.
>
> What am I missing?
>
> [1] 
> https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/writer/WriterStreamRDFPlain.html#WriterStreamRDFPlain-org.apache.jena.atlas.io.AWriter-
> [2] 
> https://jena.apache.org/documentation/javadoc/arq/org/apache/jena/riot/system/StreamRDFLib.html
>
> Martynas


[3.0.1] Upgrading to 3.8.0

2018-09-17 Thread Martynas Jusevičius
Hi,

now that SPIN API has done that, we are finally forced to do (a long
overdue) upgrade from 3.0.1 to 3.8.0.

This brings up a few deprecated/removed methods that I would like to
ask some help with:
1. ReaderRIOT.setErrorHandler()
2. ReaderRIOT.setParserProfile()
3. ParserProfile.setBaseURI()
4. RDFDataMgr.createReader()
5. RiotLib.profile(String baseIRI, boolean resolveIRIs, boolean
checking, ErrorHandler handler)
6. org.apache.jena.sparql.engine.http.Service.queryAuthUser/queryAuthPwd

Could I get any pointers as to what these should be replaced with?

Thanks

Martynas
atomgraph.com


Re: Distro package

2018-10-03 Thread Martynas Jusevičius
I think Docker would be a more portable platform. Fuseki could have an
image based on this: https://hub.docker.com/r/stain/jena-fuseki/

I also think I know what the maintainers will answer: this is an
open-source project, so contributions are welcome ;)
On Wed, Oct 3, 2018 at 2:28 PM Laura Morales  wrote:
>
> Are there any plans to package jena/fuseki to Debian or other distros?


Re: Distro package

2018-10-03 Thread Martynas Jusevičius
What Docker addresses nicely is uniform deployment across different
platforms. And on top of that come automation, swarms, etc.
On Wed, Oct 3, 2018 at 2:52 PM Laura Morales  wrote:
>
> To be honest, for as long as Fuseki is not packaged in any distribution, I'd 
> rather compile it myself than contribute to the trend of bloated app 
> virtualization...
>
>
>
> Sent: Wednesday, October 03, 2018 at 2:30 PM
> From: "Martynas Jusevičius" 
> To: jena-users-ml 
> Subject: Re: Distro package
> I think Docker would be a more portable platform. Fuseki could have an
> image based on this: https://hub.docker.com/r/stain/jena-fuseki/
>
> I also think I know what the maintainers will answer: this is an
> open-source project, so contributions are welcome ;)
> On Wed, Oct 3, 2018 at 2:28 PM Laura Morales  wrote:
> >
> > Are there any plans to package jena/fuseki to Debian or other distros?


Re: Distro package

2018-10-03 Thread Martynas Jusevičius
What about VOLUME for data persistence?
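To sketch an answer to my own question, a VOLUME could be added to the minimal Dockerfile like this; the databases path and the --loc usage are assumptions, not an official image.

```dockerfile
# Sketch: the minimal Fuseki Dockerfile plus a data volume.
# Assumes fuseki-server.jar and log4j.properties in the build context.
FROM openjdk:8

EXPOSE 3030

RUN  mkdir -p /apache-jena/databases
COPY log4j.properties  /apache-jena/
COPY fuseki-server.jar /apache-jena/

# TDB data written under this path survives container removal when mounted
VOLUME /apache-jena/databases

ENTRYPOINT [ \
   "/usr/bin/java", "-jar", \
   "-Dlog4j.configuration=file:/apache-jena/log4j.properties", \
   "/apache-jena/fuseki-server.jar" ]

CMD []
```

Run e.g.: docker run -v fuseki-data:/apache-jena/databases jena/fuseki --loc /apache-jena/databases/DB /ds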
On Wed, Oct 3, 2018 at 3:51 PM Andy Seaborne  wrote:
>
> The cycle time on a *nix distribution is long, and has a long tail. That
> in turn can create costs on the project (so if people want to step up ...)
>
> A Dockerfile can be part of the release process.
>
> A simple Dockerfile is possible - some WIP trying to be minimal:
>
> ---
> # Basic Fuseki Dockerfile.
> #
> # Assumes:
> # 1 - fuseki-server.jar
> # 2 - log4j.properties
>
> # Build: docker image build -t jena/fuseki .
> # Run:   docker run -it --rm   jena/fuseki --mem /ds
> #   Add "-d" to run in the background (no stdout)
>
> FROM openjdk:8
>
> LABEL maintainer="The Apache Jena community "
>
> EXPOSE 3030
>
> # Place choices of "fuseki-server.jar" and
> # "log4j.properties" in the current directory
>
> RUN   mkdir /apache-jena
> COPY  log4j.properties  /apache-jena
> COPY  fuseki-server.jar /apache-jena
>
> ## Run Fuseki command.
> ENTRYPOINT [ \
>"/usr/bin/java", "-jar", \
>"-Dlog4j.configuration=file:/apache-jena/log4j.properties",  \
>"/apache-jena/fuseki-server.jar"         \
>]
>
> ## Command line arguments are those for Fuseki.
> CMD []
> ---
>
>  Andy
>
> On 03/10/18 14:14, Martynas Jusevičius wrote:
> > What Docker addresses nicely is uniform deployment across different
> > platforms. And on top of that comes automatization, swarms etc.
> > On Wed, Oct 3, 2018 at 2:52 PM Laura Morales  wrote:
> >>
> >> To be honest, for as long as Fuseki is not packaged in any distribution, 
> >> I'd rather compile it myself than contribute to the trend of bloated app 
> >> virtualization...
> >>
> >>
> >>
> >> Sent: Wednesday, October 03, 2018 at 2:30 PM
> >> From: "Martynas Jusevičius" 
> >> To: jena-users-ml 
> >> Subject: Re: Distro package
> >> I think Docker would be a more portable platform. Fuseki could have an
> >> image based on this: https://hub.docker.com/r/stain/jena-fuseki/
> >>
> >> I also think I know what the maintainers will answer: this is an
> >> open-source project, so contributions are welcome ;)
> >> On Wed, Oct 3, 2018 at 2:28 PM Laura Morales  wrote:
> >>>
> >>> Are there any plans to package jena/fuseki to Debian or other distros?


Re: Distro package

2018-10-10 Thread Martynas Jusevičius
Andy,

I successfully built an image using your instructions.

However, I did not succeed in running it:

$ docker run --rm   jena/fuseki --mem //ds
[2018-10-10 11:03:09] Server INFO  Dataset: in-memory
[2018-10-10 11:03:09] Server ERROR Can't find resourceBase (tried
webapp, src/main/webapp, /./webapp and /./src/main/webapp)
[2018-10-10 11:03:09] Server ERROR Failed to start

The double dash in //ds is escaped because I'm running bash on
Windows. Also not sure why you had the -ti option there.

Is the current directory somehow off? It would probably need an
entrypoint script to be able to output or change it.

Is the Dockerfile on GitHub somewhere?
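In case it helps later readers: "Can't find resourceBase" suggests the jar used was the webapp build of fuseki-server.jar, which expects the webapp/ directory from the apache-jena-fuseki distribution to be resolvable from the working directory. Whether that is the cause here is an assumption; a sketch that copies the whole distribution instead:

```dockerfile
# Sketch, assuming the apache-jena-fuseki distribution (containing
# fuseki-server.jar *and* webapp/) has been unpacked into the build context.
FROM openjdk:8

EXPOSE 3030

COPY apache-jena-fuseki /apache-jena-fuseki
# webapp/ is looked up relative to the working directory, hence WORKDIR
WORKDIR /apache-jena-fuseki

ENTRYPOINT [ "/usr/bin/java", "-jar", "fuseki-server.jar" ]
CMD []
```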
On Wed, Oct 3, 2018 at 4:00 PM Andy Seaborne  wrote:
>
>
>
> On 03/10/18 14:58, Martynas Jusevičius wrote:
> > What about VOLUME for data persistence?
>
> WIP!
> Send a suggested modification!
>
> > On Wed, Oct 3, 2018 at 3:51 PM Andy Seaborne  wrote:
> >>
> >> The cycle time on a *nix distribution is long, and has a long tail. That
> >> in turn can create costs on the project (so if people want to step up ...)
> >>
> >> A Dockerfile can be part of the release process.
> >>
> >> A simple Dockerfile is possible - some WIP trying to be minimal:
> >>
> >> ---
> >> # Basic Fuseki Dockerfile.
> >> #
> >> # Assumes:
> >> # 1 - fuseki-server.jar
> >> # 2 - log4j.properties
> >>
> >> # Build: docker image build -t jena/fuseki .
> >> # Run:   docker run -it --rm   jena/fuseki --mem /ds
> >> #   Add "-d" to run in the background (no stdout)
> >>
> >> FROM openjdk:8
> >>
> >> LABEL maintainer="The Apache Jena community "
> >>
> >> EXPOSE 3030
> >>
> >> # Place choices of "fuseki-server.jar" and
> >> # "log4j.properties" in the current directory
> >>
> >> RUN   mkdir /apache-jena
> >> COPY  log4j.properties  /apache-jena
> >> COPY  fuseki-server.jar /apache-jena
> >>
> >> ## Run Fuseki command.
> >> ENTRYPOINT [ \
> >> "/usr/bin/java", "-jar", \
> >> "-Dlog4j.configuration=file:/apache-jena/log4j.properties",  \
> >> "/apache-jena/fuseki-server.jar" \
> >> ]
> >>
> >> ## Command line arguments are those for Fuseki.
> >> CMD []
> >> ---
> >>
> >>   Andy
> >>
> >> On 03/10/18 14:14, Martynas Jusevičius wrote:
> >>> What Docker addresses nicely is uniform deployment across different
> >>> platforms. And on top of that comes automatization, swarms etc.
> >>> On Wed, Oct 3, 2018 at 2:52 PM Laura Morales  wrote:
> >>>>
> >>>> To be honest, for as long as Fuseki is not packaged in any distribution, 
> >>>> I'd rather compile it myself than contribute to the trend of bloated app 
> >>>> virtualization...
> >>>>
> >>>>
> >>>>
> >>>> Sent: Wednesday, October 03, 2018 at 2:30 PM
> >>>> From: "Martynas Jusevičius" 
> >>>> To: jena-users-ml 
> >>>> Subject: Re: Distro package
> >>>> I think Docker would be a more portable platform. Fuseki could have an
> >>>> image based on this: https://hub.docker.com/r/stain/jena-fuseki/
> >>>>
> >>>> I also think I know what the maintainers will answer: this is an
> >>>> open-source project, so contributions are welcome ;)
> >>>> On Wed, Oct 3, 2018 at 2:28 PM Laura Morales  wrote:
> >>>>>
> >>>>> Are there any plans to package jena/fuseki to Debian or other distros?
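
Andy's minimal WIP Dockerfile above could be extended with a VOLUME for data persistence, as Martynas asks. A sketch only — the /tdb path, the volume name, and the --loc flag in the run example are illustrative assumptions, not something tested in this thread:

```dockerfile
# Sketch: Andy's minimal Fuseki Dockerfile plus a volume for
# persistent TDB storage. Paths are assumptions for illustration.
FROM openjdk:8

EXPOSE 3030

RUN   mkdir /apache-jena /tdb
COPY  log4j.properties  /apache-jena
COPY  fuseki-server.jar /apache-jena

# Persist the TDB database outside the container lifecycle
VOLUME /tdb

ENTRYPOINT [ \
    "/usr/bin/java", "-jar", \
    "-Dlog4j.configuration=file:/apache-jena/log4j.properties", \
    "/apache-jena/fuseki-server.jar" \
    ]

# Command line arguments are those for Fuseki, e.g.
#   docker run --rm -p 3030:3030 -v fuseki-data:/tdb jena/fuseki --loc=/tdb /ds
CMD []
```

Run with a named volume (or a host bind mount) so the database survives container removal.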


Re: Inference

2018-10-30 Thread Martynas Jusevičius
Maybe this can help:
https://github.com/jfmunozf/Jena-Fuseki-Reasoner-Inference/wiki/Configuring-Apache-Jena-Fuseki-2.4.1-inference-and-reasoning-support-using-SPARQL-1.1:-Jena-inference-rules,-RDFS-Entailment-Regimes-and-OWL-reasoning
On Tue, Oct 30, 2018 at 8:22 AM Laura Morales  wrote:
>
> Is it enough to load the Schema triples along with my triples? I thought I 
> would need some of those "inference" or "reasoner" extensions.
>
>
>
> Sent: Tuesday, October 30, 2018 at 8:06 AM
> From: "Alex To" 
> To: users@jena.apache.org
> Subject: Re: Inference
> See SPARQL property path https://www.w3.org/TR/sparql11-query/#propertypaths
>
> Assuming you have the Schema vocab loaded in the same dataset (or graph),
> the following should work
>
> SELECT ?entity
> WHERE {
> ?entity rdf:type/rdfs:subClassOf* :CreativeWork
> }
>
>
> On Tue, Oct 30, 2018 at 5:20 PM Laura Morales  wrote:
>
> > Let's say I have a node of type schema:Book and one of type
> > schema:VideoGame. In the Schema vocabulary, both are subclasses of
> > schema:CreativeWork.
> > Can somebody please give me a hint how to query Fuseki for
> > schema:CreativeWork in order to retrieve both types?
> >
>
>
> --
>
> Alex To
>
> PhD Candidate
>
> School of Information Technologies
>
> Knowledge Discovery and Management Research Group
>
> Faculty of Engineering & IT
>
> THE UNIVERSITY OF SYDNEY | NSW | 2006
>
> Desk 4e69 | Building J12| 1 Cleveland Street
>
> M. +61423330656


Re: Inference

2018-10-30 Thread Martynas Jusevičius
I'm confused: you were asking about inference and my link is about inference.

Alex's suggestion is something else. But you could emulate inference
using CONSTRUCT to add the "inferred" properties.
On Tue, Oct 30, 2018 at 11:46 AM Laura Morales  wrote:
>
> Thanks for the link. It might not be the solution to my problem though. If I 
> understand correctly, inference/reasoning is to create new properties on top 
> of existing graphs, whereas Alex's solution (load all triples together) is 
> about using existing properties that already exist and are defined in some 
> vocabularies.
>
>
>
>
> Sent: Tuesday, October 30, 2018 at 10:13 AM
> From: "Martynas Jusevičius" 
> To: jena-users-ml 
> Subject: Re: Inference
> Maybe this can help:
> https://github.com/jfmunozf/Jena-Fuseki-Reasoner-Inference/wiki/Configuring-Apache-Jena-Fuseki-2.4.1-inference-and-reasoning-support-using-SPARQL-1.1:-Jena-inference-rules,-RDFS-Entailment-Regimes-and-OWL-reasoning
> On Tue, Oct 30, 2018 at 8:22 AM Laura Morales  wrote:
> >
> > Is it enough to load the Schema triples along with my triples? I thought I 
> > would need some of those "inference" or "reasoner" extensions.
> >
> >
> >
> > Sent: Tuesday, October 30, 2018 at 8:06 AM
> > From: "Alex To" 
> > To: users@jena.apache.org
> > Subject: Re: Inference
> > See SPARQL property path 
> > https://www.w3.org/TR/sparql11-query/#propertypaths
> >
> > Assuming you have the Schema vocab loaded in the same dataset (or graph),
> > the following should work
> >
> > SELECT ?entity
> > WHERE {
> > ?entity rdf:type/rdfs:subClassOf* :CreativeWork
> > }
> >
> >
> > On Tue, Oct 30, 2018 at 5:20 PM Laura Morales  wrote:
> >
> > > Let's say I have a node of type schema:Book and one of type
> > > schema:VideoGame. In the Schema vocabulary, both are subclasses of
> > > schema:CreativeWork.
> > > Can somebody please give me a hint how to query Fuseki for
> > > schema:CreativeWork in order to retrieve both types?
> > >
> >
> >
> > --
> >
> > Alex To
> >
> > PhD Candidate
> >
> > School of Information Technologies
> >
> > Knowledge Discovery and Management Research Group
> >
> > Faculty of Engineering & IT
> >
> > THE UNIVERSITY OF SYDNEY | NSW | 2006
> >
> > Desk 4e69 | Building J12| 1 Cleveland Street
> >
> > M. +61423330656
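
Martynas's suggestion above — emulating inference by using CONSTRUCT to add the "inferred" properties — could look like the following sketch. It materializes only rdfs:subClassOf entailment (not full RDFS reasoning), and assumes the schema triples are loaded in the same dataset, as in Alex's example:

```sparql
# Sketch: materialize inferred rdf:type triples once, so that a later
# plain query for ?s rdf:type :CreativeWork matches Books and
# VideoGames directly, without a reasoner at query time.
PREFIX rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

CONSTRUCT { ?entity rdf:type ?super }
WHERE {
  ?entity rdf:type/rdfs:subClassOf* ?super .
}
```

The result can be loaded back into the dataset (or the same pattern used in an INSERT via SPARQL Update), trading some storage for not needing inference support in Fuseki.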


Re: Distro package

2018-11-03 Thread Martynas Jusevičius
Andy,

can we have this WIP Dockerfile on GitHub somewhere? So we could make
pull requests and such.

Martynas
On Thu, Oct 11, 2018 at 2:06 PM Andy Seaborne  wrote:
>
> The Dockerfile (which is WIP) is for running the "main" (embedded
> server). My need is deployment as a triplestore service, no direct UI.
>
> You could add FUSEKI_HOME, FUSEKI_BASE setting or add including the
> "webapp" directory.
>
>  Andy
>
> On 10/10/18 12:12, Martynas Jusevičius wrote:
> > Andy,
> >
> > I successfully built an image using your instructions.
> >
> > However, I did not succeed in running it:
> >
> > $ docker run --rm   jena/fuseki --mem //ds
> > [2018-10-10 11:03:09] Server INFO  Dataset: in-memory
> > [2018-10-10 11:03:09] Server ERROR Can't find resourceBase (tried
> > webapp, src/main/webapp, /./webapp and /./src/main/webapp)
> > [2018-10-10 11:03:09] Server ERROR Failed to start
> >
> > The double dash in //ds is escaped because I'm running bash on
> > Windows. Also not sure why you had the -ti option there.
> >
> > Is the current directory somehow off? It would probably need an
> > entrypoint script to be able to output or change it.
> >
> > Is the Dockerfile on GitHub somewhere?
> > On Wed, Oct 3, 2018 at 4:00 PM Andy Seaborne  wrote:
> >>
> >>
> >>
> >> On 03/10/18 14:58, Martynas Jusevičius wrote:
> >>> What about VOLUME for data persistence?
> >>
> >> WIP!
> >> Send a suggested modification!
> >>
> >>> On Wed, Oct 3, 2018 at 3:51 PM Andy Seaborne  wrote:
> >>>>
> >>>> The cycle time on a *nix distribution is long, and has a long tail. That
> >>>> in turn can create costs on the project (so if people want to step up ...)
> >>>>
> >>>> A Dockerfile can be part of the release process.
> >>>>
> >>>> A simple Dockerfile is possible - some WIP trying to be minimal:
> >>>>
> >>>> ---
> >>>> # Basic Fuseki Dockerfile.
> >>>> #
> >>>> # Assumes:
> >>>> # 1 - fuseki-server.jar
> >>>> # 2 - log4j.properties
> >>>>
> >>>> # Build: docker image build -t jena/fuseki .
> >>>> # Run:   docker run -it --rm   jena/fuseki --mem /ds
> >>>> #   Add "-d" to run in the background (no stdout)
> >>>>
> >>>> FROM openjdk:8
> >>>>
> >>>> LABEL maintainer="The Apache Jena community "
> >>>>
> >>>> EXPOSE 3030
> >>>>
> >>>> # Place choices of "fuseki-server.jar" and
> >>>> # "log4j.properties" in the current directory
> >>>>
> >>>> RUN   mkdir /apache-jena
> >>>> COPY  log4j.properties  /apache-jena
> >>>> COPY  fuseki-server.jar /apache-jena
> >>>>
> >>>> ## Run Fuseki command.
> >>>> ENTRYPOINT [ \
> >>>>  "/usr/bin/java", "-jar", \
> >>>>  "-Dlog4j.configuration=file:/apache-jena/log4j.properties",  \
> >>>>  "/apache-jena/fuseki-server.jar" \
> >>>>  ]
> >>>>
> >>>> ## Command line arguments are those for Fuseki.
> >>>> CMD []
> >>>> ---
> >>>>
> >>>>Andy
> >>>>
> >>>> On 03/10/18 14:14, Martynas Jusevičius wrote:
> >>>>> What Docker addresses nicely is uniform deployment across different
> >>>>> platforms. And on top of that comes automatization, swarms etc.
> >>>>> On Wed, Oct 3, 2018 at 2:52 PM Laura Morales  wrote:
> >>>>>>
> >>>>>> To be honest, for as long as Fuseki is not packaged in any 
> >>>>>> distribution, I'd rather compile it myself than contribute to the 
> >>>>>> trend of bloated app virtualization...
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> Sent: Wednesday, October 03, 2018 at 2:30 PM
> >>>>>> From: "Martynas Jusevičius" 
> >>>>>> To: jena-users-ml 
> >>>>>> Subject: Re: Distro package
> >>>>>> I think Docker would be a more portable platform. Fuseki could have an
> >>>>>> image based on this: https://hub.docker.com/r/stain/jena-fuseki/
> >>>>>>
> >>>>>> I also think I know what the maintainers will answer: this is an
> >>>>>> open-source project, so contributions are welcome ;)
> >>>>>> On Wed, Oct 3, 2018 at 2:28 PM Laura Morales  wrote:
> >>>>>>>
> >>>>>>> Are there any plans to package jena/fuseki to Debian or other distros?
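
Andy's pointers above (add a FUSEKI_HOME/FUSEKI_BASE setting, or include the "webapp" directory) could be sketched as additions to the WIP Dockerfile for anyone who wants the UI build rather than the embedded server. The paths and layout below are assumptions, not a tested setup:

```dockerfile
# Sketch: additions for running the webapp (UI) build of Fuseki,
# which needs its "webapp" resources and a run area. The /apache-jena
# and /fuseki paths are assumptions for illustration.
ENV FUSEKI_HOME=/apache-jena
ENV FUSEKI_BASE=/fuseki

# Ship the webapp directory next to the jar so the server can locate
# its resourceBase at startup (the error Martynas hit above)
COPY webapp /apache-jena/webapp

# Run area (configuration, databases) that can be volume-mounted
RUN    mkdir /fuseki
VOLUME /fuseki
```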


Re: Loosely converting JSON/XML to RDF

2018-11-05 Thread Martynas Jusevičius
Transform XML to RDF/XML or TriX using XSLT: https://www.w3.org/TR/xslt20/

XSLT 3.0 can also transform JSON: https://www.w3.org/TR/xslt-30/#json
On Mon, Nov 5, 2018 at 8:34 AM Laura Morales  wrote:
>
> I have a mixed set of datasets in XML, JSON, and RDF formats. I would like to 
> convert all the XML/JSON ones to RDF such that I can only use one query 
> language/library to access all the data, instead of having three different 
> ones. I'm also not interested in using any particular ontology or vocabulary 
> for the conversion, so anything will work as long as I can make the 
> conversion.
> What would be an appropriate strategy for this? Since RDF requires absolute 
> IRIs, would it be a good idea for example to convert all properties to 
> http://example.org/property-name-1, http://example.org/property-name-2, ...? 
> And maybe use UUIDs for nodes? Or is there a better way of doing this?


Re: Loosely converting JSON/XML to RDF

2018-11-07 Thread Martynas Jusevičius
The same could be said about RDF/XML. It depends in what context (e.g.
ETL pipeline, client/server side) you want to use the data. That's the
beauty of RDF as an abstract model - you can choose the syntax that
best fits your use case, and convert between them if necessary,
without losing information.
On Wed, Nov 7, 2018 at 11:41 AM Laura Morales  wrote:
>
> This got me thinking... if I can convert CSV, XML, and other formats to
> JSON, and then use JSON-LD context and framing to change the data to my
> liking, why do tools such as RML, YARRRML, and SPARQL-Generate exist at all?
> Do they do anything at all that can't be done with JSON-LD?
>
>
>
>
> Sent: Monday, November 05, 2018 at 9:10 AM
> From: "Christopher Johnson" 
> To: users@jena.apache.org
> Subject: Re: Loosely converting JSON/XML to RDF
> Another approach is to use JSON-LD. A JSON document can be "converted" to
> RDF by adding a context and using the toRDF method[1] in one of the JSON-LD
> libraries. Defining the context is similar to what is done with RML,
> basically mapping data objects to structured vocabulary terms. If your XML
> is sufficiently denormalized, you can also convert that to JSON and repeat
> the same process as above.
>
> Christopher Johnson
> Scientific Associate
> Universitätsbibliothek Leipzig
>
> [1] https://json-ld.org/spec/latest/json-ld-api/#object-to-rdf-conversion
>
> On Mon, 5 Nov 2018 at 08:55, Alex To  wrote:
>
> > We have web services returning XML and JSON in our environment. We use
> > https://github.com/RMLio/rmlmapper-java to map XML/JSON to RDF with
> > satisfactory results.
> >
> > Of course you need a valid URI for your XML or JSON elements, e.g. in
> > our XML, if we have ... then we use RML to map
> > it to
> >
> > http://ourdomain.com/resources/students/{id} rdfs:type
> > http://ourdomain.com/ont/Student
> >
> > You can define your own URI generation scheme whatever works for you
> >
> > You can read more about RDF Mapping Language (RML) from W3C website.
> >
> > Regards
> >
> > On Mon, 5 Nov 2018 at 6:34 pm, Laura Morales  wrote:
> >
> > > I have a mixed set of datasets in XML, JSON, and RDF formats. I would
> > like
> > > to convert all the XML/JSON ones to RDF such that I can only use one
> > query
> > > language/library to access all the data, instead of having three
> > different
> > > ones. I'm also not interested in using any particular ontology or
> > > vocabulary for the conversion, so anything will work as long as I can
> > make
> > > the conversion.
> > > What would be an appropriate strategy for this? Since RDF requires
> > > absolute IRIs, would it be a good idea for example to convert all
> > > properties to 
> > > http://example.org/property-name-1,
> > > http://example.org/property-name-2,
> > > ...? And maybe use UUIDs for nodes?
> > > Or is there a better way of doing this?
> > >
> >


Re: Loosely converting JSON/XML to RDF

2018-11-07 Thread Martynas Jusevičius
Just to add: if the choice comes down to, for example, XSLT vs.
SPARQL-Generate, then you are choosing between a standard with
multiple implementations, widely supported by the industry, vs. an
extension of a standard that is supported by a single research project
only. To me, that's a no-brainer.
On Wed, Nov 7, 2018 at 2:22 PM Conal Tuohy  wrote:
>
> Hi Laura
>
> If I recall correctly, not every JSON document can be parsed as JSON-LD
> merely by supplying a JSON-LD context. I think it is still the case that
> arrays of arrays are not valid in JSON-LD, so you may want to check your
> JSON data to ensure that it complies with that restriction.
>
> I think Martynas Jusevičius is absolutely right to say that different RDF
> syntaxes and RDF conversion techniques are better suited to different types
> of data sources. Personally, I like to use XSLT to convert both XML and
> JSON data to RDF/XML, because although XSLT generally allows use to use a
> simple templating style like SPARQL-Generate or CSVW, it also allows you to
> the flexibility to define more complex mapping operations where that's
> necessary (since XSLT is a Turing-complete programming language).
>
> Good luck with your conversions!
>
> Con
>
>
> On Wed, 7 Nov 2018 at 20:41, Laura Morales  wrote:
>
> > This got me thinking... if I can convert CSV, XML, and other formats to
> > JSON, and then use JSON-LD context and framing to change the data to my
> > liking, why do tools such as RML, YARRRML, and SPARQL-Generate exist at
> > all? Do they do anything at all that can't be done with JSON-LD?
> >
> >
> >
> >
> > Sent: Monday, November 05, 2018 at 9:10 AM
> > From: "Christopher Johnson" 
> > To: users@jena.apache.org
> > Subject: Re: Loosely converting JSON/XML to RDF
> > Another approach is to use JSON-LD. A JSON document can be "converted" to
> > RDF by adding a context and using the toRDF method[1] in one of the JSON-LD
> > libraries. Defining the context is similar to what is done with RML,
> > basically mapping data objects to structured vocabulary terms. If your XML
> > is sufficiently denormalized, you can also convert that to JSON and repeat
> > the same process as above.
> >
> > Christopher Johnson
> > Scientific Associate
> > Universitätsbibliothek Leipzig
> >
> > [1] https://json-ld.org/spec/latest/json-ld-api/#object-to-rdf-conversion
> >
> > On Mon, 5 Nov 2018 at 08:55, Alex To  wrote:
> >
> > > We have web services returning XML and JSON in our environment. We use
> > >
> > > https://github.com/RMLio/rmlmapper-java to map XML/JSON to RDF with
> > > satisfactory results.
> > >
> > > Of course you need a valid URI for your XML or JSON elements, e.g. in
> > > our XML, if we have ... then we use RML to
> > map
> > > it to
> > >
> > >
> > > http://ourdomain.com/resources/students/{id} rdfs:type
> > > http://ourdomain.com/ont/Student
> > >
> > > You can define your own URI generation scheme whatever works for you
> > >
> > > You can read more about RDF Mapping Language (RML) from W3C website.
> > >
> > > Regards
> > >
> > > On Mon, 5 Nov 2018 at 6:34 pm, Laura Morales  wrote:
> > >
> > > > I have a mixed set of datasets in XML, JSON, and RDF formats. I would
> > > like
> > > > to convert all the XML/JSON ones to RDF such that I can only use one
> > > query
> > > > language/library to access all the data, instead of having three
> > > different
> > > > ones. I'm also not interested in using any particular ontology or
> > > > vocabulary for the conversion, so anything will work as long as I can
> > > make
> > > > the conversion.
> > > > What would be an appropriate strategy for this? Since RDF requires
> > > > absolute IRIs, would it be a good idea for example to convert all
> > > > properties to
> > > http://example.org/property-name-1,
> > > http://example.org/property-name-2,
> > ...? And maybe use UUIDs for nodes?
> > > > Or is there a better way of doing this?
> > > >
> > >
> >
>
>
> --
> Conal Tuohy
> http://conaltuohy.com/
> @conal_tuohy
> +61-466-324297
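
A minimal sketch of the XSLT approach Conal describes, turning a hypothetical flat <students>/<student id="..."> document into RDF/XML. All element names and URIs here are illustrative assumptions, loosely following Alex's RML example earlier in the thread:

```xml
<!-- Sketch: XSLT 1.0 stylesheet mapping <student id="..."/> elements
     to RDF/XML resources. Element names and URIs are illustrative. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">

  <xsl:template match="/students">
    <rdf:RDF>
      <xsl:apply-templates select="student"/>
    </rdf:RDF>
  </xsl:template>

  <xsl:template match="student">
    <!-- {@id} is an attribute value template filling in the id -->
    <rdf:Description rdf:about="http://example.org/students/{@id}">
      <rdf:type rdf:resource="http://example.org/ont/Student"/>
    </rdf:Description>
  </xsl:template>

</xsl:stylesheet>
```

Any XSLT 1.0 processor (e.g. xsltproc) can run this, and the output should parse with riot as RDF/XML; more complex mappings just add templates.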


Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
Hi,

I have some large TriX files that I want to convert to N-Quads from
command line (and later on to RDF HDT).

The files validate against the TriX XML schema, so I assume they're good.
I couldn't find a standalone trix script in the /bin folder, so I tried

 riot --syntax=TriX KORT10.1.xml

and I got:

13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
Expecting XML start or end element(s). String data
"https://localhost:4443/atomgraph/city-graph/graphs/616337ee-f0f9-455f-a838-037fa875dbdd"
not allowed. Maybe there should be an rdf:parseType='Literal' for
embedding mixed XML content in RDF. Maybe a striping error.
13:25:02 ERROR riot :: [line: 16, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 21, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 26, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 31, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 36, col: 15] {E201}
Multiple children of property element
13:25:02 WARN  riot :: [line: 39, col: 77] {W102}
unqualified use of rdf:datatype is deprecated.
13:25:02 ERROR riot :: [line: 41, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 46, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 51, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 56, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 61, col: 15] {E201}
Multiple children of property element
13:25:02 ERROR riot :: [line: 66, col: 15] {E201}
Multiple children of property element
13:25:02 WARN  riot :: [line: 69, col: 77] {W102}
unqualified use of rdf:datatype is deprecated.

which looks like riot is trying to parse the file as RDF/XML instead.

Is TriX not supported on command line?


Martynas


Re: Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
Laura's example seems to work, but I thought --syntax should override
the file extension?
On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
>
> I'd have to go check the code, but Martynas, possibly the file extensions are 
> tripping you up?
>
> ajs6f
>
> > On Nov 10, 2018, at 9:10 AM, Laura Morales  wrote:
> >
> > Using the TriX file here 
> > http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3 as an 
> > example it works for me:
> >
> > 1. copy content to a file example.trix
> > 2. riot --validate example.trix
> > 3. riot --output nq example.trix
> >
> >
> >
> >
> >> Sent: Saturday, November 10, 2018 at 2:31 PM
> >> From: "Martynas Jusevičius" 
> >> To: jena-users-ml 
> >> Subject: Converting TriX to N-Quads with riot
> >>
> >> Hi,
> >>
> >> I have some large TriX files that I want to convert to N-Quads from
> >> command line (and later on to RDF HDT).
> >>
> >> The files validate against the TriX XML schema, so I assume they're good.
> >> I couldn't find a standalone trix script in the /bin folder, so I tried
> >>
> >> riot --syntax=TriX KORT10.1.xml
> >>
> >> and I got:
> >>
> >> 13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
> >> Expecting XML start or end element(s). String data
> >> "https://localhost:4443/atomgraph/city-graph/graphs/616337ee-f0f9-455f-a838-037fa875dbdd"
> >> not allowed. Maybe there should be an rdf:parseType='Literal' for
> >> embedding mixed XML content in RDF. Maybe a striping error.
> >> 13:25:02 ERROR riot :: [line: 16, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 21, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 26, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 31, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 36, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 WARN  riot :: [line: 39, col: 77] {W102}
> >> unqualified use of rdf:datatype is deprecated.
> >> 13:25:02 ERROR riot :: [line: 41, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 46, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 51, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 56, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 61, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 ERROR riot :: [line: 66, col: 15] {E201}
> >> Multiple children of property element
> >> 13:25:02 WARN  riot :: [line: 69, col: 77] {W102}
> >> unqualified use of rdf:datatype is deprecated.
> >>
> >> which looks like riot is trying to parse the file as RDF/XML instead.
> >>
> >> Is TriX not supported on command line?
> >>
> >>
> >> Martynas
> >>
>


Re: Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
There are some issues with non-canonical XMLLiterals, but in principle
the files parse.

I think --syntax TriX doesn't work though, as it throws RDF/XML related errors.
On Sat, Nov 10, 2018 at 3:32 PM Laura Morales  wrote:
>
> It should yes. Maybe you have some errors in your file? Did you try riot 
> --validate yourfile.xml? Maybe you missed the 
> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
>
>
>
> Sent: Saturday, November 10, 2018 at 3:19 PM
> From: "Martynas Jusevičius" 
> To: jena-users-ml 
> Subject: Re: Converting TriX to N-Quads with riot
> Laura's example seems to work, but I thought --syntax should override
> the file extension?
> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> >
> > I'd have to go check the code, but Martynas, possibly the file extensions 
> > are tripping you up?
> >
> > ajs6f
> >
> > > On Nov 10, 2018, at 9:10 AM, Laura Morales  wrote:
> > >
> > > Using the TriX file here 
> > > http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3 as an 
> > > example it works for me:
> > >
> > > 1. copy content to a file example.trix
> > > 2. riot --validate example.trix
> > > 3. riot --output nq example.trix
> > >
> > >
> > >
> > >
> > >> Sent: Saturday, November 10, 2018 at 2:31 PM
> > >> From: "Martynas Jusevičius" 
> > >> To: jena-users-ml 
> > >> Subject: Converting TriX to N-Quads with riot
> > >>
> > >> Hi,
> > >>
> > >> I have some large TriX files that I want to convert to N-Quads from
> > >> command line (and later on to RDF HDT).
> > >>
> > >> The files validate against the TriX XML schema, so I assume they're good.
> > >> I couldn't find a standalone trix script in the /bin folder, so I tried
> > >>
> > >> riot --syntax=TriX KORT10.1.xml
> > >>
> > >> and I got:
> > >>
> > >> 13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
> > >> Expecting XML start or end element(s). String data
> > >> "https://localhost:4443/atomgraph/city-graph/graphs/616337ee-f0f9-455f-a838-037fa875dbdd"
> > >> not allowed. Maybe there should be an rdf:parseType='Literal' for
> > >> embedding mixed XML content in RDF. Maybe a striping error.
> > >> 13:25:02 ERROR riot :: [line: 16, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 21, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 26, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 31, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 36, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 WARN riot :: [line: 39, col: 77] {W102}
> > >> unqualified use of rdf:datatype is deprecated.
> > >> 13:25:02 ERROR riot :: [line: 41, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 46, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 51, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 56, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 61, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 ERROR riot :: [line: 66, col: 15] {E201}
> > >> Multiple children of property element
> > >> 13:25:02 WARN riot :: [line: 69, col: 77] {W102}
> > >> unqualified use of rdf:datatype is deprecated.
> > >>
> > >> which looks like riot is trying to parse the file as RDF/XML instead.
> > >>
> > >> Is TriX not supported on command line?
> > >>
> > >>
> > >> Martynas
> > >>
> >


Re: Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
Yes, as I wrote, it works without --syntax and with the .trix extension.
On Sat, Nov 10, 2018 at 3:51 PM ajs6f  wrote:
>
> Just as an experiment, can you verify that changing the extensions to .trix 
> and removing the --syntax flag works?
>
> ajs6f
>
> > On Nov 10, 2018, at 9:49 AM, Martynas Jusevičius  
> > wrote:
> >
> > There are some issues with non-canonical XMLLiterals, but in principle
> > the files parse.
> >
> > I think --syntax TriX doesn't work though, as it throws RDF/XML related 
> > errors.
> > On Sat, Nov 10, 2018 at 3:32 PM Laura Morales  wrote:
> >>
> >> It should yes. Maybe you have some errors in your file? Did you try riot 
> >> --validate yourfile.xml? Maybe you missed the 
> >> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
> >>
> >>
> >>
> >> Sent: Saturday, November 10, 2018 at 3:19 PM
> >> From: "Martynas Jusevičius" 
> >> To: jena-users-ml 
> >> Subject: Re: Converting TriX to N-Quads with riot
> >> Laura's example seems to work, but I thought --syntax should override
> >> the file extension?
> >> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> >>>
> >>> I'd have to go check the code, but Martynas, possibly the file extensions 
> >>> are tripping you up?
> >>>
> >>> ajs6f
> >>>
> >>>> On Nov 10, 2018, at 9:10 AM, Laura Morales  wrote:
> >>>>
> >>>> Using the TriX file here 
> >>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3 as an 
> >>>> example it works for me:
> >>>>
> >>>> 1. copy content to a file example.trix
> >>>> 2. riot --validate example.trix
> >>>> 3. riot --output nq example.trix
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> Sent: Saturday, November 10, 2018 at 2:31 PM
> >>>>> From: "Martynas Jusevičius" 
> >>>>> To: jena-users-ml 
> >>>>> Subject: Converting TriX to N-Quads with riot
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> I have some large TriX files that I want to convert to N-Quads from
> >>>>> command line (and later on to RDF HDT).
> >>>>>
> >>>>> The files validate against the TriX XML schema, so I assume they're 
> >>>>> good.
> >>>>> I couldn't find a standalone trix script in the /bin folder, so I tried
> >>>>>
> >>>>> riot --syntax=TriX KORT10.1.xml
> >>>>>
> >>>>> and I got:
> >>>>>
> >>>>> 13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
> >>>>> Expecting XML start or end element(s). String data
> >>>>> "https://localhost:4443/atomgraph/city-graph/graphs/616337ee-f0f9-455f-a838-037fa875dbdd"
> >>>>> not allowed. Maybe there should be an rdf:parseType='Literal' for
> >>>>> embedding mixed XML content in RDF. Maybe a striping error.
> >>>>> 13:25:02 ERROR riot :: [line: 16, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 21, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 26, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 31, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 36, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 WARN riot :: [line: 39, col: 77] {W102}
> >>>>> unqualified use of rdf:datatype is deprecated.
> >>>>> 13:25:02 ERROR riot :: [line: 41, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 46, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 51, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 56, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 61, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 ERROR riot :: [line: 66, col: 15] {E201}
> >>>>> Multiple children of property element
> >>>>> 13:25:02 WARN riot :: [line: 69, col: 77] {W102}
> >>>>> unqualified use of rdf:datatype is deprecated.
> >>>>>
> >>>>> which looks like riot is trying to parse the file as RDF/XML instead.
> >>>>>
> >>>>> Is TriX not supported on command line?
> >>>>>
> >>>>>
> >>>>> Martynas
> >>>>>
> >>>
>
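
For reference, the pipeline that emerges from this thread: rename the file so riot selects the TriX parser by extension (since --syntax=TriX misfires into the RDF/XML parser here), then validate and emit N-Quads. The file name is the one used above:

```shell
$ mv KORT10.1.xml KORT10.1.trix    # let riot pick the parser by extension
$ riot --validate KORT10.1.trix
$ riot --output nq KORT10.1.trix > KORT10.1.nq
```

The resulting N-Quads file can then be fed to an RDF HDT converter as the original post intends.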


Re: Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
Laura,

I verified that my file is not the issue. You can take the same
example that you found in the HP paper and try:

pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/linkeddatahub-apps/apps/atomgraph/grunddata/data/trix$
riot --syntax=TriX test.xml
15:03:52 ERROR riot :: [line: 3, col: 32] {E202}
Expecting XML start or end element(s). String data
"http://example.org/graph1" not allowed. Maybe there should be an
rdf:parseType='Literal' for embedding mixed XML content in RDF. Maybe
a striping error.
15:03:52 ERROR riot :: [line: 4, col: 10] {E201}
Multiple children of property element
15:03:52 ERROR riot :: [line: 9, col: 10] {E201}
Multiple children of property element
15:03:52 ERROR riot :: [line: 14, col: 10] {E201}
Multiple children of property element
15:03:52 WARN  riot :: [line: 18, col: 53] {W102}
unqualified use of rdf:datatype is deprecated.
_:Bbe5af522X2D52c0X2D4879X2Da235X2Da6b9ac7f14ee
<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
<http://www.w3.org/2004/03/trix/trix-1/TriX> .
_:B79b951ecX2Dc4efX2D4f28X2D850cX2D5b34b530f5c3
<http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
<http://www.w3.org/2004/03/trix/trix-1/uri> .
_:Bbe5af522X2D52c0X2D4879X2Da235X2Da6b9ac7f14ee
<http://www.w3.org/2004/03/trix/trix-1/graph>
_:B79b951ecX2Dc4efX2D4f28X2D850cX2D5b34b530f5c3 .
On Sat, Nov 10, 2018 at 3:54 PM Laura Morales  wrote:
>
> Also maybe share some of the file?
>
>
>
> Sent: Saturday, November 10, 2018 at 3:51 PM
> From: ajs6f 
> To: users@jena.apache.org
> Subject: Re: Converting TriX to N-Quads with riot
> Just as an experiment, can you verify that changing the extensions to .trix 
> and removing the --syntax flag works?
>
> ajs6f
>
> > On Nov 10, 2018, at 9:49 AM, Martynas Jusevičius  
> > wrote:
> >
> > There are some issues with non-canonical XMLLiterals, but in principle
> > the files parse.
> >
> > I think --syntax TriX doesn't work though, as it throws RDF/XML related 
> > errors.
> > On Sat, Nov 10, 2018 at 3:32 PM Laura Morales  wrote:
> >>
> >> It should yes. Maybe you have some errors in your file? Did you try riot 
> >> --validate yourfile.xml? Maybe you missed the 
> >> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
> >>
> >>
> >>
> >> Sent: Saturday, November 10, 2018 at 3:19 PM
> >> From: "Martynas Jusevičius" 
> >> To: jena-users-ml 
> >> Subject: Re: Converting TriX to N-Quads with riot
> >> Laura's example seems to work, but I thought --syntax should override
> >> the file extension?
> >> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> >>>
> >>> I'd have to go check the code, but Martynas, possibly the file extensions 
> >>> are tripping you up?
> >>>
> >>> ajs6f
> >>>
> >>>> On Nov 10, 2018, at 9:10 AM, Laura Morales  wrote:
> >>>>
> >>>> Using the TriX file here 
> >>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3 as an 
> >>>> example it works for me:
> >>>>
> >>>> 1. copy content to a file example.trix
> >>>> 2. riot --validate example.trix
> >>>> 3. riot --output nq example.trix
> >>>>
> >>>>
> >>>>
> >>>>
> >>>>> Sent: Saturday, November 10, 2018 at 2:31 PM
> >>>>> From: "Martynas Jusevičius" 
> >>>>> To: jena-users-ml 
> >>>>> Subject: Converting TriX to N-Quads with riot
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> I have some large TriX files that I want to convert to N-Quads from
> >>>>> command line (and later on to RDF HDT).
> >>>>>
> >>>>> The files validate against the TriX XML schema, so I assume they're 
> >>>>> good.
> >>>>> I couldn't find a standalone trix script in the /bin folder, so I tried
> >>>>>
> >>>>> riot --syntax=TriX KORT10.1.xml
> >>>>>
> >>>>> and I got:
> >>>>>
> >>>>> 13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
> >>>>> Expecting XML start or end element(s). String data
> >>>>> "https://localhost:4443/atomgraph/city-graph/graphs/616337ee-f0f9-455f-a838-037fa875dbdd";
> >>>>> not allowed. Maybe there should be an rdf:parseType='Literal' for
&

Re: Converting TriX to N-Quads with riot

2018-11-10 Thread Martynas Jusevičius
I'm using 3.6.0. I'll try with a newer version.
On Sat, Nov 10, 2018 at 6:15 PM Andy Seaborne  wrote:
>
> Works for me.
>
> $ riot --syntax=TriX  t.xml
> <http://example.org/s> <http://example.org/p> <http://example.org/o> .
>
> v3.9.0
>
> (this is reminiscent of an old bug)
>
>  Andy
>
> t.xml:
> 
> 
>
>  
>http://example.org/s
>http://example.org/p
>http://example.org/o
>  
>
> 
>
>
> On 10/11/2018 15:16, ajs6f wrote:
> > Please file a ticket for this, with a file that shows the problem and a 
> > complete invocation that fails.
> >
> > ajs6f
> >
> >> On Nov 10, 2018, at 10:04 AM, Martynas Jusevičius  
> >> wrote:
> >>
> >> Yes, as I wrote, without --syntax and with .trix extension it works.
> >> On Sat, Nov 10, 2018 at 3:51 PM ajs6f  wrote:
> >>>
> >>> Just as an experiment, can you verify that changing the extensions to 
> >>> .trix and removing the --syntax flag works?
> >>>
> >>> ajs6f
> >>>
> >>>> On Nov 10, 2018, at 9:49 AM, Martynas Jusevičius 
> >>>>  wrote:
> >>>>
> >>>> There are some issues with non-canonical XMLLiterals, but in principle
> >>>> the files parse.
> >>>>
> >>>> I think --syntax TriX doesn't work though, as it throws RDF/XML related 
> >>>> errors.
> >>>> On Sat, Nov 10, 2018 at 3:32 PM Laura Morales  wrote:
> >>>>>
> >>>>> It should yes. Maybe you have some errors in your file? Did you try 
> >>>>> riot --validate yourfile.xml? Maybe you missed the 
> >>>>> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
> >>>>>
> >>>>>
> >>>>>
> >>>>> Sent: Saturday, November 10, 2018 at 3:19 PM
> >>>>> From: "Martynas Jusevičius" 
> >>>>> To: jena-users-ml 
> >>>>> Subject: Re: Converting TriX to N-Quads with riot
> >>>>> Laura's example seems to work, but I thought --syntax should override
> >>>>> the file extension?
> >>>>> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> >>>>>>
> >>>>>> I'd have to go check the code, but Martynas, possibly the file 
> >>>>>> extensions are tripping you up?
> >>>>>>
> >>>>>> ajs6f
> >>>>>>
> >>>>>>> On Nov 10, 2018, at 9:10 AM, Laura Morales  wrote:
> >>>>>>>
> >>>>>>> Using the TriX file here 
> >>>>>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3 as 
> >>>>>>> an example it works for me:
> >>>>>>>
> >>>>>>> 1. copy content to a file example.trix
> >>>>>>> 2. riot --validate example.trix
> >>>>>>> 3. riot --output nq example.trix
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>>> Sent: Saturday, November 10, 2018 at 2:31 PM
> >>>>>>>> From: "Martynas Jusevičius" 
> >>>>>>>> To: jena-users-ml 
> >>>>>>>> Subject: Converting TriX to N-Quads with riot
> >>>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> I have some large TriX files that I want to convert to N-Quads from
> >>>>>>>> command line (and later on to RDF HDT).
> >>>>>>>>
> >>>>>>>> The files validate against the TriX XML schema, so I assume they're 
> >>>>>>>> good.
> >>>>>>>> I couldn't find a standalone trix script in the /bin folder, so I 
> >>>>>>>> tried
> >>>>>>>>
> >>>>>>>> riot --syntax=TriX KORT10.1.xml
> >>>>>>>>
> >>>>>>>> and I got:
> >>>>>>>>
> >>>>>>>> 13:25:02 ERROR riot :: [line: 15, col: 99] {E202}
> >>>>>>>> Expecting XML start or end element(s). String data
> >>>>>>>> "https://localhost:4443/ato

Re: Converting TriX to N-Quads with riot

2018-11-11 Thread Martynas Jusevičius
I'm on Jena 3.9.0 now.

While we're at it, can you explain how to convert TriX, or at least
N-Quads, to N-Triples by stripping named graphs? I tried

riot --nocheck --out=nt KORT10.1.nq
riot --nocheck --out=nt KORT10.1.trix

Neither produced any output.

The reason is that my end goal is HDT, which currently does not support graphs.
On Sat, Nov 10, 2018 at 6:22 PM Andy Seaborne  wrote:
>
>
>
> On 10/11/2018 17:15, Andy Seaborne wrote:
> > Works for me.
> >
> > $ riot --syntax=TriX  t.xml
> > <http://example.org/s> <http://example.org/p> <http://example.org/o> .
> >
> > v3.9.0
> >
> > (this is reminiscent of an old bug)
>
> JENA-1535
> Bug in 3.4 to 3.7
> Fixed in 3.8.0
>
>
>
> >
> >  Andy
> >
> > t.xml:
> > 
> > 
> >
> >  
> >http://example.org/s
> >http://example.org/p
> >http://example.org/o
> >  
> >
> > 
> >
> >
> > On 10/11/2018 15:16, ajs6f wrote:
> >> Please file a ticket for this, with a file that shows the problem and
> >> a complete invocation that fails.
> >>
> >> ajs6f
> >>
> >>> On Nov 10, 2018, at 10:04 AM, Martynas Jusevičius
> >>>  wrote:
> >>>
> >>> Yes, as I wrote, without --syntax and with .trix extension it works.
> >>> On Sat, Nov 10, 2018 at 3:51 PM ajs6f  wrote:
> >>>>
> >>>> Just as an experiment, can you verify that changing the extensions
> >>>> to .trix and removing the --syntax flag works?
> >>>>
> >>>> ajs6f
> >>>>
> >>>>> On Nov 10, 2018, at 9:49 AM, Martynas Jusevičius
> >>>>>  wrote:
> >>>>>
> >>>>> There are some issues with non-canonical XMLLiterals, but in principle
> >>>>> the files parse.
> >>>>>
> >>>>> I think --syntax TriX doesn't work though, as it throws RDF/XML
> >>>>> related errors.
> >>>>> On Sat, Nov 10, 2018 at 3:32 PM Laura Morales 
> >>>>> wrote:
> >>>>>>
> >>>>>> It should yes. Maybe you have some errors in your file? Did you
> >>>>>> try riot --validate yourfile.xml? Maybe you missed the
> >>>>>> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> Sent: Saturday, November 10, 2018 at 3:19 PM
> >>>>>> From: "Martynas Jusevičius" 
> >>>>>> To: jena-users-ml 
> >>>>>> Subject: Re: Converting TriX to N-Quads with riot
> >>>>>> Laura's example seems to work, but I thought --syntax should override
> >>>>>> the file extension?
> >>>>>> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> >>>>>>>
> >>>>>>> I'd have to go check the code, but Martynas, possibly the file
> >>>>>>> extensions are tripping you up?
> >>>>>>>
> >>>>>>> ajs6f
> >>>>>>>
> >>>>>>>> On Nov 10, 2018, at 9:10 AM, Laura Morales 
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>> Using the TriX file here
> >>>>>>>> http://www.hpl.hp.com/techreports/2004/HPL-2004-56.pdf on page 3
> >>>>>>>> as an example it works for me:
> >>>>>>>>
> >>>>>>>> 1. copy content to a file example.trix
> >>>>>>>> 2. riot --validate example.trix
> >>>>>>>> 3. riot --output nq example.trix
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>>> Sent: Saturday, November 10, 2018 at 2:31 PM
> >>>>>>>>> From: "Martynas Jusevičius" 
> >>>>>>>>> To: jena-users-ml 
> >>>>>>>>> Subject: Converting TriX to N-Quads with riot
> >>>>>>>>>
> >>>>>>>>> Hi,
> >>>>>>>>>
> >>>>>>>>> I have some large TriX files that I want to convert to N-Quads
> >>>>>>>>&

Re: Converting TriX to N-Quads with riot

2018-11-11 Thread Martynas Jusevičius
I got a tip that rapper should be able to convert N-Quads to N-Triples.
http://librdf.org/raptor/rapper.html

Would be nice if Jena had this feature too.
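As a rough illustration of the sed workaround discussed in this thread — deleting the trailing graph term from each quad to get triples — here is a minimal sketch. Assumptions: every line is a well-formed quad ending in "<graphIRI> .", graph names are IRIs (not blank nodes), and `sample.nq` is a made-up filename:

```shell
# Sketch only: convert N-Quads to N-Triples by stripping the graph IRI.
# Assumes each line ends with "... <graphIRI> ." and no blank-node graphs.
printf '%s\n' \
  '<http://example.org/s> <http://example.org/p> "o" <http://example.org/g> .' \
  > sample.nq

# Drop the final "<...>" token immediately before the terminating " ."
sed -E 's|[[:space:]]+<[^>]*>[[:space:]]+\.$| .|' sample.nq > sample.nt
cat sample.nt
# -> <http://example.org/s> <http://example.org/p> "o" .
```

Note the caveat: on a line whose object is an IRI and which carries no graph term, this regex would strip the object instead, so it is only safe on pure N-Quads input.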
On Sun, Nov 11, 2018 at 11:11 AM Laura Morales  wrote:
>
> I think the only option is to use cli tools such as sed to strip the graph 
> name from nquads, so basically you convert to nt then you strip out graph 
> names with sed. I also had the same problem and asked for some kind of 
> --ignore-graph flag in riot and other jena tools, but IIRC the answer was to 
> use sed or some other cli tools. It's rather cumbersome but that's the way it 
> is, unless somebody implements it.
>
>
>
>
> Sent: Sunday, November 11, 2018 at 11:00 AM
> From: "Martynas Jusevičius" 
> To: jena-users-ml 
> Subject: Re: Converting TriX to N-Quads with riot
> I'm on Jena 3.9.0 now.
>
> While we're at it, can you explain how to convert TriX, or at least
> N-Quads, to N-Triples by stripping named graphs? I tried
>
> riot --nocheck --out=nt KORT10.1.nq
> riot --nocheck --out=nt KORT10.1.trix
>
> Neither produced any output.
>
> The reason is that my end goal is HDT, which currently does not support 
> graphs.
> On Sat, Nov 10, 2018 at 6:22 PM Andy Seaborne  wrote:
> >
> >
> >
> > On 10/11/2018 17:15, Andy Seaborne wrote:
> > > Works for me.
> > >
> > > $ riot --syntax=TriX t.xml
> > > <http://example.org/s> <http://example.org/p[http://example.org/p]> 
> > > <http://example.org/o[http://example.org/o]> .
> > >
> > > v3.9.0
> > >
> > > (this is reminiscent of an old bug)
> >
> > JENA-1535
> > Bug in 3.4 to 3.7
> > Fixed in 3.8.0
> >
> >
> >
> > >
> > > Andy
> > >
> > > t.xml:
> > > 
> > > 
> > > 
> > > 
> > > http://example.org/s[http://example.org/s]
> > > http://example.org/p[http://example.org/p]
> > > http://example.org/o[http://example.org/o]
> > > 
> > > 
> > > 
> > >
> > >
> > > On 10/11/2018 15:16, ajs6f wrote:
> > >> Please file a ticket for this, with a file that shows the problem and
> > >> a complete invocation that fails.
> > >>
> > >> ajs6f
> > >>
> > >>> On Nov 10, 2018, at 10:04 AM, Martynas Jusevičius
> > >>>  wrote:
> > >>>
> > >>> Yes, as I wrote, without --syntax and with .trix extension it works.
> > >>> On Sat, Nov 10, 2018 at 3:51 PM ajs6f  wrote:
> > >>>>
> > >>>> Just as an experiment, can you verify that changing the extensions
> > >>>> to .trix and removing the --syntax flag works?
> > >>>>
> > >>>> ajs6f
> > >>>>
> > >>>>> On Nov 10, 2018, at 9:49 AM, Martynas Jusevičius
> > >>>>>  wrote:
> > >>>>>
> > >>>>> There are some issues with non-canonical XMLLiterals, but in principle
> > >>>>> the files parse.
> > >>>>>
> > >>>>> I think --syntax TriX doesn't work though, as it throws RDF/XML
> > >>>>> related errors.
> > >>>>> On Sat, Nov 10, 2018 at 3:32 PM Laura Morales 
> > >>>>> wrote:
> > >>>>>>
> > >>>>>> It should yes. Maybe you have some errors in your file? Did you
> > >>>>>> try riot --validate yourfile.xml? Maybe you missed the
> > >>>>>> xmlns="http://www.w3.org/2004/03/trix/trix-1/"?
> > >>>>>>
> > >>>>>>
> > >>>>>>
> > >>>>>> Sent: Saturday, November 10, 2018 at 3:19 PM
> > >>>>>> From: "Martynas Jusevičius" 
> > >>>>>> To: jena-users-ml 
> > >>>>>> Subject: Re: Converting TriX to N-Quads with riot
> > >>>>>> Laura's example seems to work, but I thought --syntax should override
> > >>>>>> the file extension?
> > >>>>>> On Sat, Nov 10, 2018 at 3:17 PM ajs6f  wrote:
> > >>>>>>>
> > >>>>>>> I'd have to go check the code, but Martynas, possibly the file
> > >>>>>>> extensions are tripping you up?
> > >>>>>>>
> > >>>>>>> ajs6f
> > >>>>>

Re: Converting TriX to N-Quads with riot

2018-11-11 Thread Martynas Jusevičius
Nice to know, thanks!

We should put together a Docker image with all these utilities ;)
On Sun, 11 Nov 2018 at 16.02, Adrian Gschwend  wrote:

> On 11.11.18 18:36, Martynas Jusevičius wrote:
>
> > I got a tip that rapper should be able to convert N-Quads to N-Triples.
> > http://librdf.org/raptor/rapper.html
>
> I did use rapper too for that but for large datasets it was too slow. I
> use "serd" now for batch-conversion, this is by far the fastest lib I
> found. It converts nquads to ntriples as well:
>
> https://drobilla.net/software/serd
>
> I once opened an issue for what you are looking for:
>
> https://github.com/drobilla/serd/issues/12
>
> currently, I do it like this:
>
> #!/usr/bin/env bash
> set -euo pipefail
>
> tdbdump --loc target/tdb | sed '\#example.org#d' | serdi -o ntriples - |
> gzip --stdout > target/everything.nt.gz
>
> > In this example I strip out a certain graph, if you don't need that just
> skip this part. serdi throws away the graph in nquads
>
>
> regards
>
> Adrian
>


Re: Converting TriX to N-Quads with riot

2018-11-11 Thread Martynas Jusevičius
Why have to install them manually (and in a system-specific way) if they
can come pre-packaged in a single image?

Laura you should use Docker more, you’ll get the hang of it ;)
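If someone does put such an image together, a hypothetical starting point could look like the sketch below. Every package name (raptor2-utils for rapper, serd-bin for serdi) and the download URL are unverified assumptions, not a tested recipe:

```dockerfile
# Hypothetical sketch only -- package names and URLs are assumptions.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        raptor2-utils serd-bin default-jre-headless ca-certificates \
    && rm -rf /var/lib/apt/lists/*
# Unpack an Apache Jena binary distribution so riot & friends are on PATH.
ADD https://archive.apache.org/dist/jena/binaries/apache-jena-3.9.0.tar.gz /opt/
RUN tar -xzf /opt/apache-jena-3.9.0.tar.gz -C /opt \
    && rm /opt/apache-jena-3.9.0.tar.gz
ENV JENA_HOME=/opt/apache-jena-3.9.0
ENV PATH="$JENA_HOME/bin:$PATH"
```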
On Sun, 11 Nov 2018 at 16.45, Laura Morales  wrote:

> > We should put together a Docker image with all these utilities
>
> or... you know... just make a list (or a graph).
>
>
>
>
> Sent: Sunday, November 11, 2018 at 4:38 PM
> From: "Martynas Jusevičius" 
> To: users@jena.apache.org
> Subject: Re: Converting TriX to N-Quads with riot
> Nice to know, thanks!
>
> We should put together a Docker image with all these utilities ;)
> On Sun, 11 Nov 2018 at 16.02, Adrian Gschwend  wrote:
>
> > On 11.11.18 18:36, Martynas Jusevičius wrote:
> >
> > > I got a tip that rapper should be able to convert N-Quads to N-Triples.
> > > http://librdf.org/raptor/rapper.html
> >
> > I did use rapper too for that but for large datasets it was too slow. I
> > use "serd" now for batch-conversion, this is by far the fastest lib I
> > found. It converts nquads to ntriples as well:
> >
> > https://drobilla.net/software/serd[https://drobilla.net/software/serd]
> >
> > I once opened an issue for what you are looking for:
> >
> >
> https://github.com/drobilla/serd/issues/12[https://github.com/drobilla/serd/issues/12]
> >
> > currently, I do it like this:
> >
> > #!/usr/bin/env bash
> > set -euo pipefail
> >
> > tdbdump --loc target/tdb | sed '\#example.org#d' | serdi -o ntriples - |
> > gzip --stdout > target/everything.nt.gz
> >
> > > In this example I strip out a certain graph, if you don't need that just
> > skip this part. serdi throws away the graph in nquads
> >
> >
> > regards
> >
> > Adrian
> >
>


Is JENA_HOME the same as JENAROOT?

2018-11-22 Thread Martynas Jusevičius
Hi,

the current documentation is unclear IMO:
https://jena.apache.org/documentation/tools/index.html#common-issues-with-running-the-tools

It says $JENAROOT needs to be set, but then proceeds to show how to
check $JENA_HOME.
Which one is it? Or both?

Martynas
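For what it's worth, the tool scripts shipped with recent releases appear to read JENA_HOME, while JENAROOT looks like older terminology; setting both to the installation directory is a safe hedge. A sketch (the install path below is a placeholder):

```shell
# Assumption: JENA_HOME is what the current bin/ scripts consult;
# JENAROOT is exported too in case an older script still reads it.
JENA_HOME="$HOME/apache-jena-3.9.0"   # placeholder install path
JENAROOT="$JENA_HOME"
PATH="$JENA_HOME/bin:$PATH"
export JENA_HOME JENAROOT PATH
echo "JENA_HOME=$JENA_HOME"
```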


Parsing from stdin breaks in 3.9.0

2018-11-22 Thread Martynas Jusevičius
Hi,

I have such a simple Turtle file called kaunas.ttl:

./create-container.sh \
-b https://linkeddatahub.com:4443/demo/city-graph/ \
-f "cert.pem" \
-p "test1234" \
--title "Kaunas" \
--slug "kaunas" \
https://linkeddatahub.com:4443/demo/city-graph/

It parses from command line using 3.6.0:

pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.6.0"
pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
cat kaunas.ttl | turtle
--base="https://linkeddatahub.com:4443/demo/city-graph/";
_:Be9c8204f16652e5daab8df47ee02facf


.
_:Be9c8204f16652e5daab8df47ee02facf  "Kaunas" .
_:Be9c8204f16652e5daab8df47ee02facf
 "kaunas" .

But fails using 3.9.0:

pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.9.0"
pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
cat kaunas.ttl | turtle
--base="https://linkeddatahub.com:4443/demo/city-graph/";
20:16:32 ERROR riot :: [line: 1, col: 1 ] Expected
BNode or IRI: Got: [DIRECTIVE:prefix]
pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$

What is happening here? Can't use 3.9.0 as all our CLI scripts would break.

Even stranger that passing the file as an argument and not a stream
works on both versions:

turtle --base="https://linkeddatahub.com:4443/demo/city-graph/" kaunas.ttl

The system is Ubuntu 16.04.3 LTS (Windows Subsystem for Linux).

Martynas
atomgraph.com


Re: Parsing from stdin breaks in 3.9.0

2018-11-22 Thread Martynas Jusevičius
Sorry, this is the correct Turtle file:

@prefix def: .
@prefix dct:<http://purl.org/dc/terms/> .
@prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy/domain#> .
_:container a def:Container .
_:container dct:title "Kaunas" .
_:container dh:slug "kaunas" .
On Thu, Nov 22, 2018 at 9:20 PM Martynas Jusevičius
 wrote:
>
> Hi,
>
> I have such a simple Turtle file called kaunas.ttl:
>
> ./create-container.sh \
> -b https://linkeddatahub.com:4443/demo/city-graph/ \
> -f "cert.pem" \
> -p "test1234" \
> --title "Kaunas" \
> --slug "kaunas" \
> https://linkeddatahub.com:4443/demo/city-graph/
>
> It parses from command line using 3.6.0:
>
> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.6.0"
> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> cat kaunas.ttl | turtle
> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> _:Be9c8204f16652e5daab8df47ee02facf
> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
> <https://linkeddatahub.com:4443/demo/city-graph/ns/default#Container>
> .
> _:Be9c8204f16652e5daab8df47ee02facf <http://purl.org/dc/terms/title> "Kaunas" 
> .
> _:Be9c8204f16652e5daab8df47ee02facf
> <https://www.w3.org/ns/ldt/document-hierarchy/domain#slug> "kaunas" .
>
> But fails using 3.9.0:
>
> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.9.0"
> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> cat kaunas.ttl | turtle
> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> 20:16:32 ERROR riot :: [line: 1, col: 1 ] Expected
> BNode or IRI: Got: [DIRECTIVE:prefix]
> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
>
> What is happening here? Can't use 3.9.0 as all our CLI scripts would break.
>
> Even stranger that passing the file as an argument and not a stream
> works on both versions:
>
> turtle --base="https://linkeddatahub.com:4443/demo/city-graph/" kaunas.ttl
>
> The system is Ubuntu 16.04.3 LTS (Windows Subsystem for Linux).
>
> Martynas
> atomgraph.com


Re: Parsing from stdin breaks in 3.9.0

2018-11-22 Thread Martynas Jusevičius
cat kaunas.ttl | riot --syntax ttl
--base="https://linkeddatahub.com:4443/demo/city-graph/"

That worked. trig etc. scripts are broken re. stdin input though.
On Thu, Nov 22, 2018 at 10:55 PM Andy Seaborne  wrote:
>
>  >> What is happening here? Can't use 3.9.0 as all our CLI scripts would
> break.
>
> You can use "riot --syntax ttl"
>
>      Andy
>
> On 22/11/2018 20:22, Martynas Jusevičius wrote:
> > Sorry, this is the correct Turtle file:
> >
> > @prefix def: .
> > @prefix dct:<http://purl.org/dc/terms/> .
> > @prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy/domain#> .
> > _:container a def:Container .
> > _:container dct:title "Kaunas" .
> > _:container dh:slug "kaunas" .
> > On Thu, Nov 22, 2018 at 9:20 PM Martynas Jusevičius
> >  wrote:
> >>
> >> Hi,
> >>
> >> I have such a simple Turtle file called kaunas.ttl:
> >>
> >> ./create-container.sh \
> >> -b https://linkeddatahub.com:4443/demo/city-graph/ \
> >> -f "cert.pem" \
> >> -p "test1234" \
> >> --title "Kaunas" \
> >> --slug "kaunas" \
> >> https://linkeddatahub.com:4443/demo/city-graph/
> >>
> >> It parses from command line using 3.6.0:
> >>
> >> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.6.0"
> >> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >> cat kaunas.ttl | turtle
> >> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >> _:Be9c8204f16652e5daab8df47ee02facf
> >> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
> >> <https://linkeddatahub.com:4443/demo/city-graph/ns/default#Container>
> >> .
> >> _:Be9c8204f16652e5daab8df47ee02facf <http://purl.org/dc/terms/title> 
> >> "Kaunas" .
> >> _:Be9c8204f16652e5daab8df47ee02facf
> >> <https://www.w3.org/ns/ldt/document-hierarchy/domain#slug> "kaunas" .
> >>
> >> But fails using 3.9.0:
> >>
> >> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.9.0"
> >> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >> cat kaunas.ttl | turtle
> >> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >> 20:16:32 ERROR riot :: [line: 1, col: 1 ] Expected
> >> BNode or IRI: Got: [DIRECTIVE:prefix]
> >> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>
> >> What is happening here? Can't use 3.9.0 as all our CLI scripts would break.
> >>
> >> Even stranger that passing the file as an argument and not a stream
> >> works on both versions:
> >>
> >> turtle --base="https://linkeddatahub.com:4443/demo/city-graph/" kaunas.ttl
> >>
> >> The system is Ubuntu 16.04.3 LTS (Windows Subsystem for Linux).
> >>
> >> Martynas
> >> atomgraph.com


Re: Parsing from stdin breaks in 3.9.0

2018-11-22 Thread Martynas Jusevičius
I can change our CLI scripts, not a problem.

But my question is, if the per-syntax scripts still exist in 3.9.0,
why are they not parsing from stdin (anymore)? Makes no sense like
this. I would say it's a bug :)
On Thu, Nov 22, 2018 at 11:05 PM Andy Seaborne  wrote:
>
> Sure - same issue. "riot --syntax trix". The issue is in "riot", the
> superclass of turtle, trix etc.
>
> But why not
>
> riot kaunas.ttl
>
> ?
>
>  Andy
>
> On 22/11/2018 21:58, Martynas Jusevičius wrote:
> > cat kaunas.ttl | riot --syntax ttl
> > --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >
> > That worked. trig etc. scripts are broken re. stdin input though.
> > On Thu, Nov 22, 2018 at 10:55 PM Andy Seaborne  wrote:
> >>
> >>   >> What is happening here? Can't use 3.9.0 as all our CLI scripts would
> >> break.
> >>
> >> You can use "riot --syntax ttl"
> >>
> >>   Andy
> >>
> >> On 22/11/2018 20:22, Martynas Jusevičius wrote:
> >>> Sorry, this is the correct Turtle file:
> >>>
> >>> @prefix def: .
> >>> @prefix dct:<http://purl.org/dc/terms/> .
> >>> @prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy/domain#> .
> >>> _:container a def:Container .
> >>> _:container dct:title "Kaunas" .
> >>> _:container dh:slug "kaunas" .
> >>> On Thu, Nov 22, 2018 at 9:20 PM Martynas Jusevičius
> >>>  wrote:
> >>>>
> >>>> Hi,
> >>>>
> >>>> I have such a simple Turtle file called kaunas.ttl:
> >>>>
> >>>> ./create-container.sh \
> >>>> -b https://linkeddatahub.com:4443/demo/city-graph/ \
> >>>> -f "cert.pem" \
> >>>> -p "test1234" \
> >>>> --title "Kaunas" \
> >>>> --slug "kaunas" \
> >>>> https://linkeddatahub.com:4443/demo/city-graph/
> >>>>
> >>>> It parses from command line using 3.6.0:
> >>>>
> >>>> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>>> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.6.0"
> >>>> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>>> cat kaunas.ttl | turtle
> >>>> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >>>> _:Be9c8204f16652e5daab8df47ee02facf
> >>>> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type>
> >>>> <https://linkeddatahub.com:4443/demo/city-graph/ns/default#Container>
> >>>> .
> >>>> _:Be9c8204f16652e5daab8df47ee02facf <http://purl.org/dc/terms/title> 
> >>>> "Kaunas" .
> >>>> _:Be9c8204f16652e5daab8df47ee02facf
> >>>> <https://www.w3.org/ns/ldt/document-hierarchy/domain#slug> "kaunas" .
> >>>>
> >>>> But fails using 3.9.0:
> >>>>
> >>>> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>>> export JENA_HOME="/mnt/c/Users/pumba/WebRoot/apache-jena-3.9.0"
> >>>> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>>> cat kaunas.ttl | turtle
> >>>> --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >>>> 20:16:32 ERROR riot :: [line: 1, col: 1 ] Expected
> >>>> BNode or IRI: Got: [DIRECTIVE:prefix]
> >>>> pumba@LAPTOP-BL3MCU0O:/mnt/c/Users/pumba/WebRoot/AtomGraph/LinkedDataHub/scripts$
> >>>>
> >>>> What is happening here? Can't use 3.9.0 as all our CLI scripts would 
> >>>> break.
> >>>>
> >>>> Even stranger that passing the file as an argument and not a stream
> >>>> works on both versions:
> >>>>
> >>>> turtle --base="https://linkeddatahub.com:4443/demo/city-graph/"
> >>>> kaunas.ttl
> >>>>
> >>>> The system is Ubuntu 16.04.3 LTS (Windows Subsystem for Linux).
> >>>>
> >>>> Martynas
> >>>> atomgraph.com


Re: INSERT INTO

2018-11-24 Thread Martynas Jusevičius
That document predates the specification. It's not the standard. The
spec on updates is here:
https://www.w3.org/TR/2013/REC-sparql11-update-20130321/
On Sat, Nov 24, 2018 at 7:33 PM Laura Morales  wrote:
>
> Sometimes I see "INSERT INTO" queries like here for example 
> https://www.w3.org/Submission/SPARQL-Update/
> These don't seem to work with jena
>
> org.apache.jena.query.QueryParseException: Encountered " "into" "INTO "
> Was expecting:
> "{" ...
>
> at 
> org.apache.jena.sparql.lang.ParserSPARQL11Update._parse(ParserSPARQL11Update.java:63)
> at 
> org.apache.jena.sparql.lang.ParserSPARQL11Update.parse$(ParserSPARQL11Update.java:45)
> at 
> org.apache.jena.sparql.lang.UpdateParser.parse(UpdateParser.java:48)
>
> but my question is... are these instructions part of the SPARQL specification 
> at all? I can't find them described anywhere, all I see are "INSERT" and 
> "INSERT DATA".


Re: JSON & JENA Re: ✅ Literals as subjects Re: Toward easier RDF: a proposal

2018-11-29 Thread Martynas Jusevičius
Marco,

FYI XSLT 3.0 supports JSON transformations: https://www.w3.org/TR/xslt-30/#json

Martynas
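A minimal sketch of what that enables (assumptions: an XSLT 3.0 processor such as Saxon 9.8+, and `input.json` as a placeholder path; the onward mapping to RDF is left out):

```xml
<!-- Sketch: convert a JSON document to the standard XPath 3.1 XML
     representation, which a further template pass could map to TriX
     or RDF/XML. -->
<xsl:stylesheet version="3.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output indent="yes"/>
  <xsl:template name="xsl:initial-template">
    <xsl:copy-of select="json-to-xml(unparsed-text('input.json'))"/>
  </xsl:template>
</xsl:stylesheet>
```

The `json-to-xml()` result uses the maps/arrays vocabulary from the XPath functions namespace, so the onward RDF mapping is an ordinary XSLT exercise.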
On Thu, Nov 29, 2018 at 2:27 PM Marco Neumann  wrote:
>
> good to know that we are on the same page here Adam. With regards to json
> let's limit the scope of the discussion here to the Jena project for now. I
> am not looking at an alternative to JSON-LD, correct my if I am wrong but
> as far as my limited understanding of JSON-LD goes it is a way to
> store/serialize RDF like data in a json format. while my use case would be
> a customer that presents any data in json and wants me to read this into a
> sparql endpoint for further inspection (with bnodes for arrays, warts and
> all). Currently I have to programmatically write transformations to allow
> me to read the data into a Jena store. Can JSON-LD already help me with
> this task?
>
>
>
> On Thu, Nov 29, 2018 at 12:47 PM ajs6f  wrote:
>
> > I agree _very heartily_ with the caution that Andy and Marco are
> > expressing. I've been following the conversation on semantic-...@w3.org
> > and I have yet to hear anything that seems very useful or practical to me.
> >
> > That having been said, and speaking very much as a member of W3C's JSON-LD
> > Working Group, I'm also not ecstatic about setting up an alternative to
> > JSON-LD. Perhaps you could say a little about why it's not a good choice
> > for data access? I would hope that you would be able to equip your generic
> > JSON with a JSON-LD context and roll on without any special new Jena
> > tooling needed... is that not possible (or optimal) for some reason? If
> > it's related to Jena, we can talk about it here, and if it's related to
> > JSON-LD, I'd be very happy to take your concern to the WG.
> >
> > ajs6f
> >
> > > On Nov 29, 2018, at 7:40 AM, Marco Neumann 
> > wrote:
> > >
> > > I agree Andy, there is no need to rush things here and break the API.
> > Maybe
> > > we could provide an in memory model for "Generalized RDF" as sandbox for
> > > people to play with. But what I'd like to see are more bridges to the
> > json
> > > community as it has become the defacto lingua franca for data exchange on
> > > the web now.
> > >
> > > In particular with regards to generic json rather than json-ld. possibly
> > a
> > > generic mapping to a json dataset assembler could work for data access
> > and
> > > transformation. has anybody done anything here already?
> > >
> > >
> > > On Thu, Nov 22, 2018 at 2:11 PM Andy Seaborne  wrote:
> > >
> > >> Internally, that means Graph/Node/Triple and the ARQ engine, Jena really
> > >> works on what is called  "Generalized RDF" in RDF 1.1 - so literals as
> > >> subjects, literals as predicates blank nodes as predicates - work just
> > >> fine.  Whether they are a good idea is a different question.
> > >>
> > >> RDF* works as well (in-memory graphs). Jena has Node_Triple and
> > >> Node_Graph nowadays for completeness.
> > >>
> > >> If we get into structured values (lists not encoded in triples,
> > >> sets/bags as datastructures - and these are things property graphs and
> > >> traditionally SQL also find it hard to handle), there would be work to
> > >> do, but it's not impossible.
> > >>
> > >> The impact is on the Model API is where the impact is.
> > >>
> > >> Personal opinion about changing the core specs:
> > >>
> > >> Being "better"isn't enough.  There is lots of investment in people's
> > >> time and energy has gone in to learning about RDF and communicating
> > >> about RDF.
> > >>
> > >> The impact is when new data meets old apps but also existing
> > >> thinking/learning/blogs/books/...
> > >>
> > >> Changes to the basics need to meet a higher barrier than "better".
> > >>
> > >> Andy
> > >>
> > >> On 22/11/2018 13:13, Marco Neumann wrote:
> > >>> are we prepared in Jena for such a move on the RDF syntax?
> > >>>
> > >>>
> > >>>
> > >>> -- Forwarded message -
> > >>> From: Tim Berners-Lee 
> > >>> Date: Thu, Nov 22, 2018 at 1:05 PM
> > >>> Subject: ✅ Literals as subjects Re: Toward easier RDF: a proposal
> > >>> To: David Booth 
> > >>> Cc: SW-forum Web , Dan Brickley <
> > dan...@google.com
> > >>> ,
> > >>> Sean B. Palmer , Olaf Hartig  > >,
> > >>> Axel Polleres 
> > >>>
> > >>>
> > >>>
> > >>>
> > >>> On 2018-11 -21, at 22:40, David Booth  wrote:
> > >>>
> > >>> 7. Literals as subjects.  RDF should allow "anyone to say
> > >>> anything about anything", but RDF does not currently allow
> > >>> literals as subjects!  (One work-around is to use -- you guessed
> > >>> it -- a blank node, which in turn is asserted to be owl:sameAs
> > >>> the literal.)  This deficiency may seem unimportant relative
> > >>> to other RDF difficulties, but it is a peculiar anomaly that
> > >>> may have greater impact than we realize.  Imagine an *average*
> > >>> developer, new to RDF, who unknowingly violates this rule and
> > >>> is puzzled when it doesn't work.  Negative experiences like
> > >>> that drive people away.  Even more insidiously, imagine this
> 

Re: JSON & JENA Re: ✅ Literals as subjects Re: Toward easier RDF: a proposal

2018-11-29 Thread Martynas Jusevičius
Sure, but you get that if you produce RDF/XML or TriX. I've tried that
and transformed JSON directly into TriX with Saxon.
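For what it's worth, the kind of ad-hoc transformation Marco describes below — arbitrary JSON into triples, blank nodes and "warts and all" — can be sketched in a few lines. This is a Python illustration under invented conventions (the urn:example: namespace and the mapping rules are made up here; it is neither Jena's nor JSON-LD's mapping):

```python
import json

def to_ntriples(doc, ns="urn:example:"):
    """Naive generic-JSON-to-triples sketch: objects become blank nodes,
    keys become predicates in a made-up namespace, arrays become one
    triple per element, and scalars become plain literals."""
    lines, counter = [], [0]

    def bnode():
        counter[0] += 1
        return "_:b%d" % counter[0]

    def walk(subj, key, value):
        pred = "<%s%s>" % (ns, key)
        if isinstance(value, dict):
            obj = bnode()                      # nested object -> blank node
            lines.append("%s %s %s ." % (subj, pred, obj))
            for k, v in value.items():
                walk(obj, k, v)
        elif isinstance(value, list):
            for item in value:                 # one triple per array element
                walk(subj, key, item)
        else:
            lines.append('%s %s "%s" .' % (subj, pred, value))

    root = bnode()
    for k, v in json.loads(doc).items():
        walk(root, k, v)
    return "\n".join(lines)

print(to_ntriples('{"name": "Fuseki", "tags": ["rdf", "sparql"]}'))
```

The resulting N-Triples could then be loaded into a Jena store as usual; a JSON-LD context, as Adam suggests, would of course give a principled mapping instead of this improvised one.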
On Thu, Nov 29, 2018 at 4:41 PM Marco Neumann  wrote:
>
> thanks for the link Martynas, one still has to go to rdf from xml. I find
> the data assembler in jena/d2rq more convenient but am open to new ideas.
>
> btw I never quite know who is posting as ajs6f, hence the misnomer
>
> On Thu 29. Nov 2018 at 14:29, Martynas Jusevičius 
> wrote:
>
> > Marco,
> >
> > FYI XSLT 3.0 supports JSON transformations:
> > https://www.w3.org/TR/xslt-30/#json
> >
> > Martynas
> > On Thu, Nov 29, 2018 at 2:27 PM Marco Neumann 
> > wrote:
> > >
> > > good to know that we are on the same page here Adam. With regards to json
> > > let's limit the scope of the discussion here to the Jena project for
> > now. I
> > > am not looking at an alternative to JSON-LD, correct me if I am wrong but
> > > as far as my limited understanding of JSON-LD goes it is a way to
> > > store/serialize RDF like data in a json format. while my use case would
> > be
> > > a customer that presents any data in json and wants me to read this into
> > a
> > > sparql endpoint for further inspection (with bnodes for arrays, warts and
> > > all). Currently I have to programmatically write transformations to allow
> > > me to read the data into a Jena store. Can JSON-LD already help me with
> > > this task?
> > >
> > >
> > >
> > > On Thu, Nov 29, 2018 at 12:47 PM ajs6f  wrote:
> > >
> > > > I agree _very heartily_ with the caution that Andy and Marco are
> > > > expressing. I've been following the conversation on
> > semantic-...@w3.org
> > > > and I have yet to hear anything that seems very useful or practical to
> > me.
> > > >
> > > > That having been said, and speaking very much as a member of W3C's
> > JSON-LD
> > > > Working Group, I'm also not ecstatic about setting up an alternative to
> > > > JSON-LD. Perhaps you could say a little about why it's not a good
> > choice
> > > > for data access? I would hope that you would be able to equip your
> > generic
> > > > JSON with a JSON-LD context and roll on without any special new Jena
> > > > tooling needed... is that not possible (or optimal) for some reason? If
> > > > it's related to Jena, we can talk about it here, and if it's related to
> > > > JSON-LD, I'd be very happy to take your concern to the WG.
> > > >
> > > > ajs6f
> > > >
> > > > > On Nov 29, 2018, at 7:40 AM, Marco Neumann 
> > > > wrote:
> > > > >
> > > > > I agree Andy, there is no need to rush things here and break the API.
> > > > Maybe
> > > > > we could provide an in memory model for "Generalized RDF" as sandbox
> > for
> > > > > people to play with. But what I'd like to see are more bridges to the
> > > > json
> > > > > community as it has become the de facto lingua franca for data
> > exchange on
> > > > > the web now.
> > > > >
> > > > > In particular with regards to generic json rather than json-ld.
> > possibly
> > > > a
> > > > > generic mapping to a json dataset assembler could work for data
> > access
> > > > and
> > > > > transformation. has anybody done anything here already?
> > > > >
> > > > >
> > > > > On Thu, Nov 22, 2018 at 2:11 PM Andy Seaborne 
> > wrote:
> > > > >
> > > > >> Internally, that means Graph/Node/Triple and the ARQ engine, Jena
> > really
> > > > >> works on what is called "Generalized RDF" in RDF 1.1 - so literals as
> > > > >> subjects, literals as predicates, blank nodes as predicates - work just
> > > > >> fine.  Whether they are a good idea is a different question.
> > > > >>
> > > > >> RDF* works as well (in-memory graphs). Jena has Node_Triple and
> > > > >> Node_Graph nowadays for completeness.
> > > > >>
> > > > >> If we get into structured values (lists not encoded in triples,
> > > > >> sets/bags as datastructures - and these are things property graphs
> > and
> > > > >> traditionally SQL also find it hard to handle), there would be work
> > > > >> to do, but it's not impossible.

Re: Distro package

2018-12-23 Thread Martynas Jusevičius
Andy,

this version didn't run for me due to a "fuseki-server.jar not found"
error or something like that. I had to introduce WORKDIR to fix the issue.

Committed my version here:
https://github.com/AtomGraph/fuseki-docker/blob/master/Dockerfile
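For anyone comparing, the fix boils down to adding a WORKDIR so that the relative jar path in the ENTRYPOINT resolves. A sketch against Andy's Dockerfile quoted below (the volume layout and --loc usage are assumptions on my part, not a settled convention):

```dockerfile
# Sketch: Andy's Dockerfile plus a WORKDIR, so the relative
# "fuseki-server.jar" in the ENTRYPOINT resolves at run time.
FROM java:8-jdk

ARG VERSION=3.9.0
ENV URL=http://central.maven.org/maven2/org/apache/jena/jena-fuseki-server/${VERSION}/jena-fuseki-server-${VERSION}.jar
ENV BASE=/mnt/apache-fuseki

RUN mkdir -p $BASE && \
    cd $BASE && \
    curl --silent --show-error --output fuseki-server.jar $URL

# Without this line, java runs from / and fails with
# "fuseki-server.jar not found".
WORKDIR $BASE

EXPOSE 3030
ENTRYPOINT [ "/usr/bin/java", "-jar", "fuseki-server.jar" ]
CMD []
```

For TDB persistence one could then mount a volume over a database directory, e.g. `docker run --rm -p 3030:3030 -v fuseki-data:/mnt/apache-fuseki/databases jena/fuseki --loc=databases /ds` — though, as Andy notes, mmap efficiency on non-Linux hosts is an open question.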

On Mon, Nov 5, 2018 at 2:25 PM Andy Seaborne  wrote:
>
> Sure - will get around to it:
>
> in the meantime, here is my current working version:
>
> The area to work on is where/how databases are managed between docker
> and host. The issue is efficiency of mmap access for TDB, especially for
> non-linux host OS's. I am wondering if VOLUMEs are the way to go.
>
> Any experiments with that would be helpful.
>
>  Andy
>
> ---
>
> # Basic Fuseki Dockerfile.
>
> ## To do:
> # VOLUME and databases
>
> FROM java:8-jdk
>
> LABEL maintainer="The Apache Jena community "
>
> ARG VERSION=3.9.0
> ARG SRC=http://central.maven.org/maven2/org/apache/jena/
> ARG BINARY=jena-fuseki-server/${VERSION}/jena-fuseki-server-${VERSION}.jar
>
> ENV URL=http://central.maven.org/maven2/org/apache/jena/jena-fuseki-server/${VERSION}/jena-fuseki-server-${VERSION}.jar
> ENV BASE=/mnt/apache-fuseki
>
> ## VOLUME /mnt/
>
> RUN mkdir -p $BASE && \
>   cd $BASE && \
>   curl --silent --show-error --output fuseki-server.jar $URL
>
> EXPOSE 3030
>
> ENTRYPOINT [ "/usr/bin/java", "-jar", "fuseki-server.jar" ]
>
> ## Command line arguments are those for Fuseki.
> CMD []
>
> ---
>
>
> On 03/11/2018 16:42, Martynas Jusevičius wrote:
> > Andy,
> >
> > can we have this WIP Dockerfile on GitHub somewhere? So we could make
> > pull requests and such.
> >
> > Martynas
> > On Thu, Oct 11, 2018 at 2:06 PM Andy Seaborne  wrote:
> >>
> >> The Dockerfile (which is WIP) is for running the "main" (embedded
> >> server). My need is deployment as a triplestore service, no direct UI.
> >>
> >> You could add FUSEKI_HOME, FUSEKI_BASE setting or add including the
> >> "webapp" directory.
> >>
> >>   Andy
> >>
> >> On 10/10/18 12:12, Martynas Jusevičius wrote:
> >>> Andy,
> >>>
> >>> I successfully built an image using your instructions.
> >>>
> >>> However, I did not succeed in running it:
> >>>
> >>> $ docker run --rm   jena/fuseki --mem //ds
> >>> [2018-10-10 11:03:09] Server INFO  Dataset: in-memory
> >>> [2018-10-10 11:03:09] Server ERROR Can't find resourceBase (tried
> >>> webapp, src/main/webapp, /./webapp and /./src/main/webapp)
> >>> [2018-10-10 11:03:09] Server ERROR Failed to start
> >>>
> >>> The double dash in //ds is escaped because I'm running bash on
> >>> Windows. Also not sure why you had the -ti option there.
> >>>
> >>> Is the current directory somehow off? It would probably need an
> >>> entrypoint script to be able to output or change it.
> >>>
> >>> Is the Dockerfile on GitHub somewhere?
> >>> On Wed, Oct 3, 2018 at 4:00 PM Andy Seaborne  wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 03/10/18 14:58, Martynas Jusevičius wrote:
> >>>>> What about VOLUME for data persistence?
> >>>>
> >>>> WIP!
> >>>> Send a suggested modification!
> >>>>
> >>>>> On Wed, Oct 3, 2018 at 3:51 PM Andy Seaborne  wrote:
> >>>>>>
> >>>>>> The cycle time on a *nix distribution is long, and has a long tail.
> >>>>>> That in turn can create costs on the project (so if people want to
> >>>>>> step up ...)
> >>>>>>
> >>>>>> A Dockerfile can be part of the release process.
> >>>>>>
> >>>>>> A simple Dockerfile is possible - some WIP trying to be minimal:
> >>>>>>
> >>>>>> ---
> >>>>>> # Basic Fuseki Dockerfile.
> >>>>>> #
> >>>>>> # Assumes:
> >>>>>> # 1 - fuseki-server.jar
> >>>>>> # 2 - log4j.properties
> >>>>>>
> >>>>>> # Build: docker image build -t jena/fuseki .
> >>>>>> # Run:   docker run -it --rm   jena/fuseki --mem /ds
> >>>>>> #   Add "-d" to run in t
