Re: GeoSparql example?

2023-12-29 Thread Claude Warren
I ended up using a different form of the query.  I might have to revisit
the question to see if I can make it more performant, but I did get a
working solution.

On Sat, Dec 2, 2023 at 3:20 PM Marco Neumann 
wrote:

> Did that spatial SPARQL query work for you Claude?
>
> Marco
>
> On Fri, Dec 1, 2023 at 8:08 PM Claude Warren  wrote:
>
> > can you give me an example of a query?
> >
> > On Fri, Dec 1, 2023, 19:14 Marco Neumann 
> wrote:
> >
> > > just go ahead you are almost there
> > >
> > >  wkt:asWKT "Polygon (( -5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5
> > > -5.5  ))"^^wkt:wktLiteral
> > >
> > > same with the LINESTRING
> > >
> > > Marco
> > >
> > > On Fri, Dec 1, 2023 at 6:03 PM Claude Warren  wrote:
> > >
> > > > I am playing with GeoSparql for the first time and I am trying to find
> > an
> > > > example of how to format the data.
> > > >
> > > > I have a polygon:
> > > > POLYGON ((-5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5 -5.5))
> > > >
> > > > and a linestring:
> > > > LINESTRING (-1 -3, -1 -2)
> > > >
> > > > Using the jena-geosparql module what is the SPARQL insert statement
> to
> > > > place the polygon into a model or dataset?
> > > >
> > > > Once the polygon is in, what is the query that will do the equivalent
> > of
> > > the JTS Geometry.isWithinDistance between the LineString and the
> > > Polygon?
> > > >
> > > > Thanks,
> > > > Claude
> > > >
> > > > --
> > > > LinkedIn: http://www.linkedin.com/in/claudewarren
> > > >
> > >
> > >
> > > --
> > >
> > >
> > > ---
> > > Marco Neumann
> > >
> >
>
>
> --
>
>
> ---
> Marco Neumann
>


-- 
LinkedIn: http://www.linkedin.com/in/claudewarren
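
For reference, a minimal sketch of what the insert and the within-distance
check can look like with jena-geosparql, using the standard GeoSPARQL
vocabulary.  The ex: prefix, the resource names, and the 4.0-degree
threshold are illustrative assumptions, not the query Claude settled on:

    PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
    PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
    PREFIX uom:  <http://www.opengis.net/def/uom/OGC/1.0/>
    PREFIX ex:   <http://example.org/>

    # insert the two geometries
    INSERT DATA {
      ex:poly a geo:Geometry ;
        geo:asWKT "POLYGON ((-5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5 -5.5))"^^geo:wktLiteral .
      ex:line a geo:Geometry ;
        geo:asWKT "LINESTRING (-1 -3, -1 -2)"^^geo:wktLiteral .
    }

    # rough equivalent of JTS isWithinDistance: compare geof:distance to a threshold
    SELECT ?within WHERE {
      ex:poly geo:asWKT ?p .
      ex:line geo:asWKT ?l .
      BIND ( geof:distance(?p, ?l, uom:degree) < 4.0 AS ?within )
    }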


Re: GeoSparql example?

2023-12-01 Thread Claude Warren
can you give me an example of a query?

On Fri, Dec 1, 2023, 19:14 Marco Neumann  wrote:

> just go ahead you are almost there
>
>  wkt:asWKT "Polygon (( -5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5
> -5.5  ))"^^wkt:wktLiteral
>
> same with the LINESTRING
>
> Marco
>
> On Fri, Dec 1, 2023 at 6:03 PM Claude Warren  wrote:
>
> > I am playing with GeoSparql for the first time and I am trying to find an
> > example of how to format the data.
> >
> > I have a polygon:
> > POLYGON ((-5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5 -5.5))
> >
> > and a linestring:
> > LINESTRING (-1 -3, -1 -2)
> >
> > Using the jena-geosparql module what is the SPARQL insert statement to
> > place the polygon into a model or dataset?
> >
> > Once the polygon is in, what is the query that will do the equivalent of
> > the JTS Geometry.isWithinDistance between the LineString and the
> Polygon?
> >
> > Thanks,
> > Claude
> >
> > --
> > LinkedIn: http://www.linkedin.com/in/claudewarren
> >
>
>
> --
>
>
> ---
> Marco Neumann
>


GeoSparql example?

2023-12-01 Thread Claude Warren
I am playing with GeoSparql for the first time and I am trying to find an
example of how to format the data.

I have a polygon:
POLYGON ((-5.5 -5.5, -4.5 -5.5, -4.5 -4.5, -5.5 -4.5, -5.5 -5.5))

and a linestring:
LINESTRING (-1 -3, -1 -2)

Using the jena-geosparql module what is the SPARQL insert statement to
place the polygon into a model or dataset?

Once the polygon is in, what is the query that will do the equivalent of
the JTS Geometry.isWithinDistance between the LineString and the Polygon?

Thanks,
Claude

-- 
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Build Full Text Search query using SelectBuilder

2022-05-30 Thread Claude Warren


On 24/05/2022 11:01, Goławski, Paweł wrote:


PREFIX   ex: <http://www.example.org/resources#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX text: <http://jena.apache.org/text#>

SELECT ?s ?lbl
WHERE {
  ?s a ex:Product ;
     text:query (rdfs:label 'printer') ;
     rdfs:label ?lbl
}



To generate the above, the following should work:

    final SelectBuilder builder = new SelectBuilder();
    final Query query = builder.setDistinct(true)
        .addPrefix("ex","http://www.example.org/resources#";)
    .addPrefix("rdf", RDF.uri).addPrefix("rdfs", RDFS.uri)
    .addPrefix("text","http://jena.apache.org/text#";)
    .addVar("s")
    .addVar("lbl")
    .addWhere("?s", "rdf:type", "ex:Product ")
    .addWhere("?s", "text:query", builder.list("rdfs:label", 
"printer"))
    .addWhere("?s", "rdfs:label", "?lbl")
    .build();

There are a couple of changes from your code:

1. I removed the makeVar() calls and replaced them with "?var" strings.
2. Used the builder's list() method to create the list of variables.
3. Added the prefixes by hand since I didn't have your variable.

The query can also be written using Vars as:

    final Var s = Converters.makeVar( "s" );
final Var lbl = Converters.makeVar( "lbl" );
final SelectBuilder builder = new SelectBuilder();
    final Query query = builder.setDistinct(true)
        .addPrefix("ex","http://www.example.org/resources#";)
    .addPrefix("rdf", RDF.uri).addPrefix("rdfs", RDFS.uri)
    .addPrefix("text","http://jena.apache.org/text#";)
    .addVar(s)
    .addVar(lbl)
    .addWhere(s, "rdf:type", "ex:Product ")
    .addWhere(s, "text:query", builder.list("rdfs:label", 
"printer"))
    .addWhere(s, "rdfs:label", lbl)
    .build();

Hope this helps,
Claude
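
If it helps, a sketch of executing the built query (assumes a dataset that
already has a Lucene text index configured; the variable names are
placeholders):

    try (QueryExecution qexec = QueryExecutionFactory.create(query, dataset)) {
        ResultSet results = qexec.execSelect();
        while (results.hasNext()) {
            QuerySolution row = results.next();
            System.out.println(row.get("s") + " -> " + row.get("lbl"));
        }
    }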


Re: Adding subquery to construct query with the query builder

2021-07-03 Thread Claude Warren



On 2021/04/02 11:02:21, Daniel Jeller  wrote: 
> Hi,
> 
> I’m trying to create a CONSTRUCT query using the Jena query builder and I 
> couldn’t figure out how to correctly add a subquery. I’ve tried to create 
> code similar to the following but in the end the expected “Select *” clause 
> is missing, only the where parts are present.
> 
> constructBuilder
> .addConstruct(construct-clauses)
> .addWhere(
> whereBuilder
> .addSubquery(
> SelectBuilder
> .addVar(“*”)
> .addWhere(sub-where)
> )
> .addWhere(outer-where)
> )
> 
> What I’m trying to get is
> 
> CONSTRUCT {
>construct-clauses
> }
> WHERE {
> SELECT * WHERE { sub-where}
> outer-where
> }
> 
> But what I’m really getting is
> 
> CONSTRUCT {
>construct-clauses
> }
> WHERE {
> WHERE { sub-where}
> outer-where
> }
> 
> The “Select *” is missing. Am I doing something wrong with my query builder 
> usage?
> 
> Any help is appreciated! Thanks in advance.
> 
> Daniel
> --
> Mag. Daniel Jeller
> Monasterium / Digitisation / IT
> 
> ICARUS
> --
> Spaces Central Station
> Gertrude-Fröhlich-Sander Str. 2-4
> Tower C, Floor 7-9
> A-1100 Vienna
> AUSTRIA
> 
> Web: http://icar-us.eu
> Platforms: http://monasterium.net, 
> http://matricula.info
> Join the ICARUS friends‘ association: 
> http://4all.icar-us.eu
> 
> 

Daniel,

Sorry it has taken so long to respond.

There is a pull request to fix this: https://github.com/apache/jena/pull/1026

With the fixes your construction should be simplified.

> constructBuilder
> .addConstruct(construct-clauses)
> .addWhere(
> whereBuilder
> .addSubquery(
> SelectBuilder
> .addVar(“*”)
> .addWhere(sub-where)
> )
> .addWhere(outer-where)
> )
> 

could be written as

new ConstructBuilder()
.addConstruct( construct-clauses )
.addSubquery( new SelectBuilder().addVar("*").addWhere( sub-where ) )
.addWhere( outer-where )

or simplified even further

new ConstructBuilder()
.addConstruct( construct-clauses )
.addSubquery( new WhereBuilder().addWhere( sub-where ) )
.addWhere( outer-where )

Claude
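
As a concrete illustration of the fixed pattern, something like the
following should work (the prefixes, triples, and variable names are
invented, and the subquery method is spelled as in the message above, so
check the javadocs for the exact casing):

    ConstructBuilder cb = new ConstructBuilder()
        .addPrefix("rdf", RDF.uri)
        .addPrefix("foaf", "http://xmlns.com/foaf/0.1/")
        .addConstruct("?s", "foaf:name", "?name")
        .addSubquery(new SelectBuilder()
            .addVar("*")
            .addWhere("?s", "foaf:givenName", "?name"))
        .addWhere("?s", "rdf:type", "foaf:Person");
    Query constructQuery = cb.build();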


ApacheCon Call for Papers

2021-03-09 Thread Claude Warren
Greetings,

We have an RDF / Linked Data track at ApacheCon this year.  If you are
working on a project that uses Jena we would like to hear from you.

Please submit paper proposals at https://acah2021.jamhosted.net/

Thank you,
Claude


ApacheCon@home 2020 - Semantic Graph BoF

2020-09-02 Thread Claude Warren
Greetings,

ApacheCon is almost upon us.  This year it is online and free.  So please
make plans to attend.

This year Apache Jena is hosting a Semantic Graph "Birds of a Feather"
session[1] as part of the Jena track.  Please come join us and discuss all
things Semantic Graph.

[1] https://www.apachecon.com/acah2020/tracks/jena.html


Jena ApacheCon Talks

2020-08-31 Thread Claude Warren
Jena has several talks at ApacheCon this year.  It is being held online
and it is free.  Please take a look at the track schedule:
https://www.apachecon.com/acah2020/tracks/jena.html

Claude


Re: Java heap space

2020-02-16 Thread Claude Warren
Luis,

Did you solve this problem?  Did you try setting the -Xmx property on the
Java command line?  I don't recall what the default is, but you could set it
to something like -Xmx1g to allocate 1 GB of memory.
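
For example (the jar and class names are placeholders):

    java -Xmx2g -cp myapp.jar com.example.RuleRunner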

Claude


On Wed, Feb 5, 2020 at 12:24 PM Luis Enrique Ramos García
 wrote:

> >
> > Dear members of jena community
> >
>
> I am working with two ontologies, merged in a model, the size of the
> ontologies is as follows:
>
> *ontology1* 2,7 Mbytes, 25K axioms and 6000 individuals
>
> *ontology2* 8,1 Mbytes, 72 K axioms, and 9000 items,
>
> I implemented a rule that checks if an item of o1 has the same label of an
> item of o2, with a generic rule reasoner of jena
>
> when I use fewer items in ontology 2 (3000 and 6000), I get the result in 7.2
> and 21 sec, but with 9000 items, I get the following error:
>
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at
>
> org.apache.jena.reasoner.rulesys.impl.BindingVectorMultiSet.getSubSet(BindingVectorMultiSet.java:144)
> at org.apache.jena.reasoner.rulesys.impl.RETEQueue.fire(RETEQueue.java:114)
> at org.apache.jena.reasoner.rulesys.impl.RETEQueue.fire(RETEQueue.java:128)
> at org.apache.jena.reasoner.rulesys.impl.RETEQueue.fire(RETEQueue.java:128)
> at
>
> org.apache.jena.reasoner.rulesys.impl.RETEClauseFilter.fire(RETEClauseFilter.java:227)
> at
>
> org.apache.jena.reasoner.rulesys.impl.RETEEngine.inject(RETEEngine.java:492)
> at
>
> org.apache.jena.reasoner.rulesys.impl.RETEEngine.runAll(RETEEngine.java:474)
> at
>
> org.apache.jena.reasoner.rulesys.impl.RETEEngine.fastInit(RETEEngine.java:163)
> at
>
> org.apache.jena.reasoner.rulesys.FBRuleInfGraph.prepare(FBRuleInfGraph.java:471)
> at
>
> org.apache.jena.reasoner.rulesys.BasicForwardRuleInfGraph.getDeductionsGraph(BasicForwardRuleInfGraph.java:392)
> at
>
> org.apache.jena.rdf.model.impl.InfModelImpl.getDeductionsModel(InfModelImpl.java:169)
>
>
> My question is: is there a way to solve this memory issue on the Jena side,
> given that the available memory of my computer is 1.2 GB? I think that
> should not be the cause, however I could be wrong.
>
> Any support is thanked
>
>
> Luis Ramos
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Fuseki - SELECT during INSERT

2020-01-21 Thread Claude Warren
Barry,

Would you be willing to try the following:

If the security policy queries can be run directly against the TDB graph
(i.e. no reasoning required) modify your SecurityEvaluator to query that
directly, otherwise modify the SecurityEvaluator to query the infGraph
directly.  In other words have the SecurityEvaluator bypass the security
system.

Thx,
Claude
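
Something along these lines, purely as an untested sketch (the TDB
location, policy prefix, and ASK pattern are all assumptions):

    // inside the SecurityEvaluator: query the unsecured TDB model directly,
    // bypassing the permissions layer and the inference graph
    Model unsecured = TDBFactory.createDataset("DB").getDefaultModel();
    String ask = "PREFIX ex: <http://example.org/policy#> "
               + "ASK { ex:alice ex:mayRead ex:someGraph }";
    try (QueryExecution qe = QueryExecutionFactory.create(ask, unsecured)) {
        boolean allowed = qe.execAsk();
    }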

On Tue, Jan 21, 2020 at 7:54 AM Nouwt, B. (Barry)
 wrote:

> Hi Claude/Andy, thanks for the responses.
>
> @Claude: Below a shortened version of the configuration we are using
> (untested). During the processing of an INSERT query, we are firing a new
> SELECT sparql http request to the same secured dataset. We use a user that
> has unlimited permissions to prevent indefinite permission checking but it
> is indeed very slow which is not a problem for now. Our permissions
> structure uses graph patterns (BGPs) to encode which types of triples users
> have or have no access to, so I think this means no reification is being
> done. The requested (or inserted) triples (including some context) are
> being matched to the graph patterns in de security policy and the decision
> about access is being taken based on that.
>
> @Andy: we haven't tested thoroughly yet, but I suspect the CPU is not
> doing anything and just waiting for the INSERT query to finish (which does
> not finish until the SELECT query finishes). So, if SELECT queries are
> indeed being postponed until the INSERT query is finished, this is a
> deadlock situation. I'll see if we can make a threaddump to clarify things.
>
> Again, thanks for your responses and thanks for Apache Jena!
>
> Regards, Barry
>
> - config.ttl
> 
>
> @prefix : <http://www.example.org/#> .
> @prefix fuseki: <http://jena.apache.org/fuseki#> .
> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
> @prefix tdb: <http://jena.hpl.hp.com/2008/tdb#> .
> @prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
> @prefix perm: <http://apache.org/jena/permissions/Assembler#> .
>
>
> <#service1> rdf:type fuseki:Service ;
> fuseki:name   "ds" ;
> fuseki:serviceQuery   "sparql" ;   # SPARQL query service
> fuseki:serviceQuery   "query" ;# SPARQL query service
> (alt name)
> fuseki:serviceUpdate  "update" ;   # SPARQL update service
> fuseki:serviceUpload  "upload" ;   # Non-SPARQL upload
> service
> fuseki:serviceReadWriteGraphStore "data" ; # SPARQL Graph store
> protocol (read and write)
> # A separate read-only graph store endpoint:
> fuseki:serviceReadGraphStore  "get" ;  # SPARQL Graph store
> protocol (read only)
> fuseki:dataset   :dataset ;
> .
>
> perm:Model rdfs:subClassOf ja:NamedModel .
> tdb:DatasetTDB rdfs:subClassOf ja:RDFDataset .
> tdb:GraphTDB rdfs:subClassOf ja:Model .
>
> :dataset a ja:RDFDataset ;
> ja:defaultGraph <#securedGraph> ;
> .
>
> <#securedGraph> rdf:type perm:Model ;
> perm:baseModel <#infGraph> ;
> ja:modelName "https://www.example.org/securedModel" ;
> perm:evaluatorImpl <#secEvaluator> ;
> .
>
> <#secEvaluator> rdf:type perm:Evaluator ;
> perm:args [
> rdf:_1 "http://fuseki:3030/ds/query" ;
> ] ;
> perm:evaluatorClass "nl.example.MySecurityEvaluator"
> .
>
> <#infGraph> a ja:InfModel ;
> ja:baseModel <#tdbGraph> ;
> ja:reasoner [
> ja:rulesFrom  ;
> ] ;
> .
>
> <#tdbGraph> rdf:type tdb:GraphTDB ;
> tdb:location "DB" ;
> .
>
> -Original Message-
> From: Andy Seaborne 
> Sent: maandag 20 januari 2020 22:51
> To: users@jena.apache.org
> Subject: Re: Fuseki - SELECT during INSERT
>
> Hi Barry,
>
> "hangs indefinitely" -- is this the CPU is doing nothing or the CPU is
> doing work but never finishes?
>
> If it is the former, CPU doing nothing, what would be useful is a JVM
> threaddump. Should be one thread that is stuck (I'm assuming the SELECT
> runs on the same thread as the INSERT and also this deadlocks everytime,
> not, for example, when another request is happening at  the same time).
>
> Related to Claude's point. TDB has multiple-reader-and-single-writer
> (MR+SW) transactions but general purpose datasets have a multiple-reader-
> *or*-single-writer (MRSW) lock to work in the general case without k

Re: Fuseki - SELECT during INSERT

2020-01-20 Thread Claude Warren
Barry,

I just realised you said "Select against the same dataset".  Are you
selecting against an unrestricted model/graph? If you query a graph with
permissions to determine the permissions you can get into a situation where
things are running _very_ slowly.

How have you designed your permissions structure?  Are you reifying the
triples and then granting access to the reified nodes?  If so this is an
extremely processor intensive way of doing permissions checking.  The issue
being that your permissions graph will be larger than 3x the size of the
graph you are protecting.

Claude

On Mon, Jan 20, 2020 at 3:10 PM Claude Warren  wrote:

> Barry,
>
> Can you provide the configuration for the Fuseki server?  I need to know
> how the dataset(s) are constructed.
>
> Claude
>
> On Mon, Jan 20, 2020 at 11:10 AM Claude Warren  wrote:
>
>> I am not certain if the lock is the reason but am providing more
>> background on the permissions processing so someone with more dataset
>> experience can answer.
>>
>> To use the permissions on a dataset requires that the dataset be
>> constructed from individual models.  As each of the models would have to
>> have permissions assigned.  I put this out there because I know that TDB
>> has an internal dataset implementation and I want to make sure that we only
>> look in the stand alone dataset implementations.
>>
>> Claude
>>
>> On Mon, Jan 20, 2020 at 10:46 AM Nouwt, B. (Barry)
>>  wrote:
>>
>>> Hi all,
>>>
>>> we have a Security related scenario where whenever an INSERT query gets
>>> executed on our Fuseki dataset, we intercept the execution of this query
>>> (using Jena Permissions and its Security Evaluator) and during this
>>> interception we execute a SELECT query to the same dataset. Whenever we did
>>> this during a SELECT query (instead of an INSERT query), there was no
>>> problem, but when we do it during a INSERT query, it seems like the SELECT
>>> query hangs indefinitely. Could this be caused by a lock of the INSERT on
>>> that dataset?
>>>
>>> Regards, Barry
>>> This message may contain information that is not intended for you. If
>>> you are not the addressee or if this message was sent to you by mistake,
>>> you are requested to inform the sender and delete the message. TNO accepts
>>> no liability for the content of this e-mail, for the manner in which you
>>> use it and for damage of any kind resulting from the risks inherent to the
>>> electronic transmission of messages.
>>>
>>
>>
>> --
>> I like: Like Like - The likeliest place on the web
>> <http://like-like.xenei.com>
>> LinkedIn: http://www.linkedin.com/in/claudewarren
>>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Fuseki - SELECT during INSERT

2020-01-20 Thread Claude Warren
Barry,

Can you provide the configuration for the Fuseki server?  I need to know
how the dataset(s) are constructed.

Claude

On Mon, Jan 20, 2020 at 11:10 AM Claude Warren  wrote:

> I am not certain if the lock is the reason but am providing more
> background on the permissions processing so someone with more dataset
> experience can answer.
>
> To use the permissions on a dataset requires that the dataset be
> constructed from individual models.  As each of the models would have to
> have permissions assigned.  I put this out there because I know that TDB
> has an internal dataset implementation and I want to make sure that we only
> look in the stand alone dataset implementations.
>
> Claude
>
> On Mon, Jan 20, 2020 at 10:46 AM Nouwt, B. (Barry)
>  wrote:
>
>> Hi all,
>>
>> we have a Security related scenario where whenever an INSERT query gets
>> executed on our Fuseki dataset, we intercept the execution of this query
>> (using Jena Permissions and its Security Evaluator) and during this
>> interception we execute a SELECT query to the same dataset. Whenever we did
>> this during a SELECT query (instead of an INSERT query), there was no
>> problem, but when we do it during a INSERT query, it seems like the SELECT
>> query hangs indefinitely. Could this be caused by a lock of the INSERT on
>> that dataset?
>>
>> Regards, Barry
>> This message may contain information that is not intended for you. If you
>> are not the addressee or if this message was sent to you by mistake, you
>> are requested to inform the sender and delete the message. TNO accepts no
>> liability for the content of this e-mail, for the manner in which you use
>> it and for damage of any kind resulting from the risks inherent to the
>> electronic transmission of messages.
>>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Fuseki - SELECT during INSERT

2020-01-20 Thread Claude Warren
I am not certain if the lock is the reason but am providing more background
on the permissions processing so someone with more dataset experience can
answer.

To use the permissions on a dataset requires that the dataset be
constructed from individual models.  As each of the models would have to
have permissions assigned.  I put this out there because I know that TDB
has an internal dataset implementation and I want to make sure that we only
look in the stand alone dataset implementations.

Claude

On Mon, Jan 20, 2020 at 10:46 AM Nouwt, B. (Barry)
 wrote:

> Hi all,
>
> we have a Security related scenario where whenever an INSERT query gets
> executed on our Fuseki dataset, we intercept the execution of this query
> (using Jena Permissions and its Security Evaluator) and during this
> interception we execute a SELECT query to the same dataset. Whenever we did
> this during a SELECT query (instead of an INSERT query), there was no
> problem, but when we do it during a INSERT query, it seems like the SELECT
> query hangs indefinitely. Could this be caused by a lock of the INSERT on
> that dataset?
>
> Regards, Barry
> This message may contain information that is not intended for you. If you
> are not the addressee or if this message was sent to you by mistake, you
> are requested to inform the sender and delete the message. TNO accepts no
> liability for the content of this e-mail, for the manner in which you use
> it and for damage of any kind resulting from the risks inherent to the
> electronic transmission of messages.
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: programmatically construct UpdateDeleteInsert

2019-12-07 Thread Claude Warren
FYI, the QueryBuilder framework has methods to programmatically build Update
requests as well as queries.
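
For example, Paul's delete could be sketched roughly like this with the
UpdateBuilder (the ex: URI is invented and the method names are from the
QueryBuilder javadocs as I recall them, so treat it as untested):

    UpdateBuilder builder = new UpdateBuilder()
            .addPrefix("ex", "http://example.org/ex#")
            .addDelete("?r", "?p", "?o")
            .addDelete("?s", "?p1", "?r")
            .addWhere("?r", "ex:foo", "123")
            .addWhere("?r", "ex:bar", "456")
            .addWhere("?r", "?p", "?o")
            .addOptional("?s", "?p1", "?r");
    UpdateRequest req = builder.buildRequest();
    rdfConnection.update(req);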



On Thu, Dec 5, 2019 at 4:25 PM Paul Tyson  wrote:

> Never mind. I missed:
>
> QuadAcc deletes = upd.getDeleteAcc();
> deletes.addTriple(/* triple pattern to delete */);
>
> Regards,
> --Paul
>
>
> On Thu, 2019-12-05 at 09:12 -0600, Paul Tyson wrote:
> > I'm trying to construct a SPARQL update like:
> >
> > DELETE {?r ?p ?o. ?s ?p1 ?r.}
> > WHERE {?r ex:foo "123"; ex:bar "456"; ?p ?o.
> >   OPTIONAL {?s ?p1 ?r}
> > }
> >
> > In other words, delete all triples with subject or object resource that
> > has certain ex:foo and ex:bar values.
> >
> > I can't see how to set or modify the DELETE pattern in an UpdateModify
> > (or UpdateDeleteInsert) object. I'm guessing a visitor must be used,
> > but I can't put the pieces together. Can someone point out the right
> > pattern?
> >
> > UpdateModify upd = new UpdateModify();
> > upd.setHasDeleteClause(true);
> > upd.setHasInsertClause(false);
> > upd.setElement(/* set the where block */);
> > upd.visit(new UpdateVisitorBase() {
> >   @Override
> >   public void visit(UpdateModify um) {
> > /* somehow add the DELETE triples??? */
> >   }
> >  });
> >
> > I intend to use the built-up request in an UpdateRequest to be
> > submitted  by an RDFConnection, like:
> >
> > UpdateRequest req = UpdateFactory.create().add(upd);
> > rdfConnection.update(req);
> >
> > I'm using jena libs 3.13.1.
> >
> > Thanks and regards,
> > --Paul
> >
> >
>
>
>

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: jena-maven-tools still has at least ONE stalwart fan!

2019-11-28 Thread Claude Warren
I have written Maven plugins in the past and I had one project that used
the schemagen plugin.  I have not had the drive to maintain schemagen and
so have never stepped forward on this topic, but if someone steps up to
drive the maintenance I would be happy to contribute my knowledge and a
bit of time, not that I have much spare time anymore.

Claude
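
For what it's worth, the "exec plugin" route Andy mentions below might look
roughly like this in a pom.xml (the plugin version, paths, and package name
are placeholders; untested):

    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>3.1.0</version>
      <executions>
        <execution>
          <phase>generate-sources</phase>
          <goals><goal>java</goal></goals>
          <configuration>
            <mainClass>jena.schemagen</mainClass>
            <arguments>
              <argument>-i</argument>
              <argument>src/main/resources/GoodDomainOnto_owl2.ttl</argument>
              <argument>-o</argument>
              <argument>target/generated-sources/schemagen</argument>
              <argument>--package</argument>
              <argument>com.example.vocab</argument>
            </arguments>
          </configuration>
        </execution>
      </executions>
    </plugin>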

On Thu, Nov 28, 2019 at 1:46 PM Andy Seaborne  wrote:

> Hi Stu,
>
> jena-maven-tools has not been released since 3.6.0 because its tests
> don't pass after the upgrade to the Apache parent POM v19 JENA-1474.
>
> schemagen itself is not affected - it's just the maven plugin.
>
> As you note, there hasn't been much interest and no one has stepped up
> to fix and maintain it. All code needs maintenance, community discussion
> and oversight. The environment changes, Java is now releasing to a
> faster schedule, new users/uses are made so there is no such thing as
> completed software.
>
> Sorry - that is the reality of open source software!
>
> If you or anyone else is willing to do that, it can be brought back but
> the current active contributors.
>
> Or as 2B, fork it.
>
> Maybe there is a lighter weight way to provide the functionality just
> invoking schemagen (might even be as "documentation" if using the exec
> plugin). i.e. 2D - replace with a sustainable solution.
>
> 3.6.0 isn't going to be removed from either the archives of the Jena
> release area - http://archive.apache.org/dist/jena/. Versions are never
> removed or replaced.
>
> There are no plans to remove from central.maven.org either (which is run
> by Sonatype).
>
>  Andy
>
>
> On 26/11/2019 08:32, Stu B22 wrote:
> >
> > Hello Jena Community,
> >
> > I write regarding the recently retired 'jena-maven-tools' sub-project.
> It provided
> > a maven plugin used to cleanly execute the 'jena-schemagen' tool during
> a maven build.
> > I found this tool very useful for generating .java constants from domain
> ontologies.
> > Example:  I edit a file like GoodDomainOnto_owl2.ttl in my (maven)
> project
> > source tree.  Then I configure the jena-maven-tools plugin to run during
> my maven build,
> > to cleanly generate updated java constants (in an output .java file).
> These constants
> > are in turn used during later stages of the build.  Very tidy!
> >
> > But there is trouble in the wind.  I summarize what I have read about
> the apparent
> > demise of this maven plugin in items 1A, 1B, 1C below.  Then in item #2
> I describe
> > my own situation, listing some possible courses of action in 2A, 2B, 2C.
> >
> > 1A) There have been two mentions of 'jena-maven-tools' on the jena-users
> list in the last two years:
> >
> https://lists.apache.org/list.html?users@jena.apache.org:lte=2y:jena-maven-tools
> >
> > 1B) One of those mentions is this announcment from Andy S., on
> 2018/12/14 entitled
> > "Retiring Jena modules", where he said "we are looking at the status of:
> > jena-maven-tools - command line schemagen is in jena-cmds."
> >
> > Obviously this was my BEST chance to speak up, but I tarried too long:
> >
> > https://markmail.org/message/v6i3d5ktqx5j6teb
> >
> > 1C) The other mention is the 3.13.0 release announcement on 2019/09/29,
> > which included this fearsome notice:
> >
> > JENA-1760: Retire jena-maven-tools
> https://issues.apache.org/jira/browse/JENA-1760
> >
> > https://markmail.org/message/tidlnyfivbr4xumj
> >
> > ==
> >
> > 2) Currently I use jena-mvn-tools at version 3.6.0 (due for upgrade!)
> despite several bugs in
> > the maven plugin implementation.  I have used the jena-maven-tools in
> several past and current
> > projects.  I have a pragmatic interest in seeing jena-maven-tools
> continued and improved, or
> > replaced with something better.  Upon request I am happy to report what
> I know about the extant
> > bugs (in the jena-maven-tools plugin).  But as noted in 1A,1B,1C above,
> the jena-maven-tools
> > are currently on their way out of the main jena codebase.  So perhaps I
> will wind up fixing these
> > bugs myself, in a fork, or...?
> >
> > Option 2A) I formally report the bugs, and we collaborate to get them
> fixed, resulting in an
> > eventual pull request that re-establishes an improved version of
> jena-maven-tools.
> >
> > Option 2B)  If the jena team prefers to be rid of this troublesome
> plugin, it may be
> > best for me to publish a new artifact from one of my own projects.   I
> could fork the
> > code from jena-maven-tools, and eventually publish it somewhere.  I am
> guessing it would
> > take 6+ months, depending on how much support + interest we get from the
> community.
> > As illustration of how I think about this tool, here is an example where
> we used a previous
> > version of jena-maven-tools (0.8) together with 'rdf-reactor' (which is
> also falling into disuse by ...
> > everyone but me?).
> >
> >
> https://app.assembla.com/spaces/cogchar/subversion/source/HEAD/trunk/maven/org.cogchar.lib.onto/pom.xml
> >
> > Option 2C)  Perhaps ther

Re: Access control on Jena/fuseki datasets

2019-11-24 Thread Claude Warren
If you want to restrict access to datasets alone you can probably do that
in Fuseki.  If you want to grant access to specific models within a dataset
you will probably need to use the Permissions layer.
The permissions layer will allow you to restrict access to graphs or even
down to the triple level.

Restricting access to models in a dataset using Shiro would be a fairly
straight forward extension of the ShiroExampleEvaluator to map users to the
models they can see.

Claude
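
As a rough illustration, the [urls] section of shiro.ini can mix free and
protected endpoints (the dataset paths are placeholders; Shiro uses the
first matching entry, so order matters):

    [urls]
    /freedataset/** = anon
    /controlleddataset/** = authcBasic
    / = anon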

On Fri, Nov 22, 2019 at 4:41 PM Jean-Claude Moissinac <
jean-claude.moissi...@telecom-paristech.fr> wrote:

> Dear Marco,
>
> I think my previous reading of this documentation was right.
> My understanding is that the proposed solution is to develop specific Java
> code (like the ShiroExampleEvaluator) to implement  the permissions.
> I would just like to configure and use Fuseki, not start a Java development.
> I don't see clearly, by doing such code,
> * if i get something more efficient than what I do with shiro, following
> the documentation here
> https://jena.apache.org/documentation/fuseki2/fuseki-security.html
>
> * if I will be able to manage correctly the user interface while having
> some free datasets and some  protected dataset
> now, a window to enter a login/pwd is always displayed when I call the user
> interface, so I'm not able to give a free access to free datasets
> through the user interface
> In the section [urls] of shiro.ini, I have the following line to access the
> user interface
> / = anon
>
>
>
>
> --
> Jean-Claude Moissinac
>
>
>
> Le jeu. 21 nov. 2019 à 16:05, Marco Neumann  a
> écrit :
>
> > please take a look at
> >
> > https://jena.apache.org/documentation/permissions/index.html
> >
> >
> > On Thu 21. Nov 2019 at 14:00, Jean-Claude Moissinac <
> > jean-claude.moissi...@telecom-paristech.fr> wrote:
> >
> > > Hello
> > >
> > > I would like to give free access to some datasets in my fuseki server
> and
> > > control access to other datasets.
> > > With shiro, I'm able to control the sparql access points like
> > > https://myserver/dm/sparql
> > > but I'm not able to give a controlled access to the datasets user
> > interface
> > > https://myserver/dataset.html?tab=query&ds=/controlleddataset
> > > or
> > > https://myserver/dataset.html?tab=query&ds=/freedataset
> > > or
> > > https://myserver/
> > >
> > > Is there some good practices about the access control in fuseki
> > instances?
> > >
> > > Thank's in advance for any advice
> > > --
> > > Jean-Claude Moissinac
> > >
> > --
> >
> >
> > ---
> > Marco Neumann
> > KONA
> >
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Analyzing SPARQL Queries

2019-10-07 Thread Claude Warren
Bart,

I was referring to the querybuilder source code.

On Mon, Oct 7, 2019 at 11:53 AM Bart van Leeuwen 
wrote:

> Claude,
>
> This sounds useful, are you referring to the actual Jena source code for
> your examples?
>
> Met Vriendelijke Groet / With Kind Regards
> Bart van Leeuwen
>
>
> twitter: @semanticfire
> tel. +31(0)6-53182997
> Netage B.V.
> http://netage.nl
> Esdoornstraat 3
> 3461ER Linschoten
> The Netherlands
>
>
>
>
> From:Claude Warren 
> To:users@jena.apache.org
> Date:07-10-2019 12:47
> Subject:Re: Analyzing SPARQL Queries
> --
>
>
>
> Bart,
>
> Not sure exactly what you are trying to do or actually looking for but,
> assuming you have a parsed query and you want to detect the ?s a ?t
> pattern so you can state that ?t is a class, you could use
> the ElementVisitor pattern.  See the query builder WhereHandler.build()
> method as well as the BuildElementVisitor class for examples of how to
> traverse the where statements.  You will probably want to override the
> `public void visit(ElementTriplesBlock el)` and `public void
> visit(ElementPathBlock el)` methods.
>
> You should also be able to do the second inference as well.
>
> Claude
>
>
> On Mon, Oct 7, 2019 at 11:01 AM Bart van Leeuwen <
> bart_van_leeu...@netage.nl>
> wrote:
>
> > Adrian,
> >
> > I'm looking for code examples specifically Jena, don't see any references
> > here.
> >
> > Met Vriendelijke Groet / With Kind Regards
> > Bart van Leeuwen
> >
> >
> > twitter: @semanticfire
> > tel. +31(0)6-53182997
> > Netage B.V.
> > http://netage.nl
> > Esdoornstraat 3
> > 3461ER Linschoten
> > The Netherlands
> >
> >
> >
> >
> > From:Adrian Walker 
> > To:users@jena.apache.org
> > Date:07-10-2019 05:48
> > Subject:Re: Analyzing SPARQL Queries
> > --
> >
> >
> >
> > Bart,
> > This might be useful --
> > www.executable-english.com/demo_agents/RDFQueryLangComparison1.agent
> >
> >Cheers,  -- Adrian
> >
> > Adrian Walker
> > Executable English LLC
> > San Jose, CA, USA
> > (USA) 860 830 2085 (California time)
> > www.executable-english.com
> >
> > On Sun, Oct 6, 2019 at 6:13 PM Bart van Leeuwen <
> > bart_van_leeu...@netage.nl>
> > wrote:
> >
> > > Hi,
> > >
> > > I'm looking for some examples to analyze a sparql query e.g.
> > >
> > > select * where { ?s a ?t . ?s <http://example.com#b> ?x }
> > >
> > > I would like to be able to infer that ?t is rdf:class
> > >
> > > and that ?x is related to ?s by ex#b
> > >
> > > I've looked at Query.getQueryPattern() and the walker but couldn't
> find a
> > > nice example of how to do this.
> > >
> > > Met Vriendelijke Groet / With Kind Regards
> > > Bart van Leeuwen
> > >
> > >
> > > twitter: @semanticfire
> > > tel. +31(0)6-53182997
> > > Netage B.V.
> > > http://netage.nl
> > > Esdoornstraat 3
> > > 3461ER Linschoten
> > > The Netherlands
> > >
> >
> >
> >
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>
>
>

-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Analyzing SPARQL Queries

2019-10-07 Thread Claude Warren
Bart,

Not sure exactly what you are trying to do or actually looking for but,
assuming you have a parsed query and you want to detect the ?s a ?t
pattern so you can state that ?t is a class, you could use
the ElementVisitor pattern.  See the query builder WhereHandler.build()
method as well as the BuildElementVisitor class for examples of how to
traverse the where statements.  You will probably want to override the
`public void visit(ElementTriplesBlock el)` and `public void
visit(ElementPathBlock el)` methods.

You should also be able to do the second inference as well.

Claude
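
A rough, untested sketch of that approach against Bart's query (the visitor
and walker classes are in ARQ's org.apache.jena.sparql.syntax package, and
RDF.type is from org.apache.jena.vocabulary):

    Query query = QueryFactory.create(
        "SELECT * WHERE { ?s a ?t . ?s <http://example.com#b> ?x }");

    ElementWalker.walk(query.getQueryPattern(), new ElementVisitorBase() {
        @Override
        public void visit(ElementPathBlock el) {
            for (TriplePath tp : el.getPattern().getList()) {
                // 'a' is parsed as rdf:type, so ?t is being used as a class
                if (RDF.type.asNode().equals(tp.getPredicate())) {
                    System.out.println(tp.getObject() + " is used as a class");
                }
            }
        }
    });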


On Mon, Oct 7, 2019 at 11:01 AM Bart van Leeuwen 
wrote:

> Adrian,
>
> I'm looking for code examples specifically Jena, don't see any references
> here.
>
> Met Vriendelijke Groet / With Kind Regards
> Bart van Leeuwen
>
>
> twitter: @semanticfire
> tel. +31(0)6-53182997
> Netage B.V.
> http://netage.nl
> Esdoornstraat 3
> 3461ER Linschoten
> The Netherlands
>
>
>
>
> From:Adrian Walker 
> To:users@jena.apache.org
> Date:07-10-2019 05:48
> Subject:Re: Analyzing SPARQL Queries
> --
>
>
>
> Bart,
> This might be useful --
> www.executable-english.com/demo_agents/RDFQueryLangComparison1.agent
>
>Cheers,  -- Adrian
>
> Adrian Walker
> Executable English LLC
> San Jose, CA, USA
> (USA) 860 830 2085 (California time)
> www.executable-english.com
>
> On Sun, Oct 6, 2019 at 6:13 PM Bart van Leeuwen <
> bart_van_leeu...@netage.nl>
> wrote:
>
> > Hi,
> >
> > I'm looking for some examples to analyze a sparql query e.g.
> >
> > select * where { ?s a ?t . ?s <http://example.com#b> ?x }
> >
> > I would like to be able to infer that ?t is rdf:class
> >
> > and that ?x is related to ?s by ex#b
> >
> > I've looked at Query.getQueryPattern() and the walker but couldn't find a
> > nice example of how to do this.
> >
> > Met Vriendelijke Groet / With Kind Regards
> > Bart van Leeuwen
> >
> >
> > twitter: @semanticfire
> > tel. +31(0)6-53182997
> > Netage B.V.
> > http://netage.nl
> > Esdoornstraat 3
> > 3461ER Linschoten
> > The Netherlands
> >
>
>
>

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: [UnionModel] question about unexpected behaviour

2019-09-30 Thread Claude Warren
Barry,

You could create a union graph that is used for an InfModel.  Not sure why
you want a UnionGraph but ...

I think you want to start with a TDB Model and wrap that as a SecuredModel
(See comments in permissions re: SecuredModel vs SecuredGraph)

Using a SecuredModel as the basis for an InfModel is probably not what you
want to do.  The InfModel will create new triples based on the data it
reads from the underlying graph.  Different users will have different
security model results (there really is not a language to describe this).
So the inference engine for the first user will be correct for that user,
but if the second user is a user with less access the inference engine will
contain reasoned triples that the second user should not see because they
can not see the underlying data.

One way you might resolve this is to make the InfModel a secured model and
perform checks based on the reasoning you add.  This is not a space I have
explored so I fear you are on your own here.  However, there were a couple
of papers about 4 years ago concerning different approaches to securing
graphs.  There might be something in there about how to do this in a way
that will actually work.

On the other hand, if you have a limited number of groups (N) you could use
the SecuredModel to drive N different datasets in Fuseki and let Fuseki
determine which graph(s) a user can see.  In this case you would have N
InfModels running against the same underlying TDB model.  Each InfModel
would write its derived data to its own graph.  This solution will have
problems if you need to update the underlying TDB model.

While I don't have time to properly investigate this question, if you want
to explore it I would be happy to provide pointers and perhaps we can
develop an extension to work with the InfModel.

Claude

On Mon, Sep 30, 2019 at 3:52 PM Nouwt, B. (Barry)
 wrote:

> Hi Claude,
>
> Thanks for your analysis! It was (of course) not my intention to get a
> dataset with 4 graphs 😊, but it sure explains the strange behavior.
>
> I wanted to replicate the behavior of defaultUnionGraph option that is
> available in TDB. The reason I cannot use TDB's defaultUnionGraph option,
> is because in that case I do not have access to the actual default graph
> object. I need this object, because I want to wrap it with a SecuredModel
> and an InfModel.
>
> So, I want to combine the defaultUnionGraph behavior of TDB and the
> SecuredModel with a custom SecurityEvaluator from Jena Permissions and the
> InfModel with the GenericRuleReasoner. This was my attempt at doing so, but
> apparently it does not work. Do you think it is possible to construct such
> a dataset via an Assembler .ttl file?
>
> Regards, Barry
>
> -Original Message-
> From: Claude Warren 
> Sent: maandag 30 september 2019 16:33
> To: users@jena.apache.org
> Subject: Re: [UnionModel] question about unexpected behaviour
>
> Barry,
>
> You create a dataset that is comprised of the following:
>
> defaultGraph -> unionModel
> namedGraph(https://www.tno.nl/agrifood/graph/pizza/data)
> namedGraph(https://www.tno.nl/agrifood/graph/pizza/onto)
>
>
> https://www.tno.nl/agrifood/graph/pizza/data is loaded from
> file:src/main/resources/dummy1.ttl
> https://www.tno.nl/agrifood/graph/pizza/onto is loaded from
> file:src/main/resources/dummy2.ttl
>
> defaultGraph is the union of two graphs
> defaultGraph.root is loaded from file:src/main/resources/dummy1.ttl
> defaultGraph.sub1 is loaded from file:src/main/resources/dummy2.ttl
>
> writing triples into the dataset will write to the defaultGraph.  The
> defaultGraph, being a union graph, will write to the first graph
> (defaultGraph.root).  While they are initialized with the same data
> (file:src/main/resources/dummy1.ttl), this is not the same as the
> https://www.tno.nl/agrifood/graph/pizza/data  named graph.
>
> Your example basically has 4 graphs, 2 in defaultGraph and 2 named graphs.
>
> Claude
>
> On Fri, Sep 27, 2019 at 3:57 PM Nouwt, B. (Barry)
>  wrote:
>
> > Hi Claude, thanks for the explanation: from the dataset-as-quads
> > perspective it does indeed sound logical that empty graphs do not exist.
> >
> > Regarding your second answer, I've added some additional information
> > below (i.e. these can also be found on the github repo:
> > https://github.com/barrynl/jena-example). The SPARQL is executed on
> > the dataset described by the conf.ttl below. It does not have a GRAPH
> > specified in the query and it would indeed go into the default graph,
> > which is a UnionModel of the two in-memory named graphs. Of which the
> > rootGraph should store this new data. The details of how I inspect the
> > different gra

Re: [UnionModel] question about unexpected behaviour

2019-09-30 Thread Claude Warren
Barry,

You create a dataset that is comprised of the following:

defaultGraph -> unionModel
namedGraph(https://www.tno.nl/agrifood/graph/pizza/data)
namedGraph(https://www.tno.nl/agrifood/graph/pizza/onto)


https://www.tno.nl/agrifood/graph/pizza/data is loaded from
file:src/main/resources/dummy1.ttl
https://www.tno.nl/agrifood/graph/pizza/onto is loaded from
file:src/main/resources/dummy2.ttl

defaultGraph is the union of two graphs
defaultGraph.root is loaded from file:src/main/resources/dummy1.ttl
defaultGraph.sub1 is loaded from file:src/main/resources/dummy2.ttl

writing triples into the dataset will write to the defaultGraph.  The
defaultGraph, being a union graph, will write to the first graph
(defaultGraph.root).  While they are initialized with the same data
(file:src/main/resources/dummy1.ttl), this is not the same as the
https://www.tno.nl/agrifood/graph/pizza/data  named graph.

Your example basically has 4 graphs, 2 in defaultGraph and 2 named graphs.

Claude
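
The same write-to-the-first-graph behaviour shows up with a plain dynamic
union built in Java, e.g. (an untested sketch with invented URIs):

    Model data = ModelFactory.createDefaultModel();
    Model onto = ModelFactory.createDefaultModel();
    // a dynamic union delegates additions to its first (left) operand
    Model union = ModelFactory.createUnion(data, onto);
    union.add(union.createResource("urn:example:s"),
              RDF.type,
              union.createResource("urn:example:T"));
    System.out.println(data.size()); // 1 -- the new triple landed in 'data'
    System.out.println(onto.size()); // 0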

On Fri, Sep 27, 2019 at 3:57 PM Nouwt, B. (Barry)
 wrote:

> Hi Claude, thanks for the explanation: from the dataset-as-quads
> perspective it does indeed sound logical that empty graphs do not exist.
>
> Regarding your second answer, I've added some additional information below
> (i.e. these can also be found on the github repo:
> https://github.com/barrynl/jena-example). The SPARQL is executed on the
> dataset described by the conf.ttl below. It does not have a GRAPH specified
> in the query and it would indeed go into the default graph, which is a
> UnionModel of the two in-memory named graphs. Of which the rootGraph should
> store this new data. The details of how I inspect the different graphs can
> be found in the Java code in the link above. If you run this example and
> inspect the output log and 'dataset' variable (and the inner models and
> graphs) with a debugger, the data sometimes shows up and sometimes doesn't.
> I am trying to understand why... Any more ideas?
>
> Regards, Barry
>
>  SPARQL
> 
> INSERT DATA {
>     :test rdf:type pizza:Pizza .
>     :test pizza:hasCountryOfOrigin <http://192.168.99.100/pizza/Italy> .
>     :test pizza:hasBase <http://192.168.99.100/pizza/23_base> .
>     :test pizza:hasSpiciness <http://192.168.99.100/pizza/23_flauw> .
>     :test pizza:hasTopping :18-Tomaat .
>     :test pizza:hasTopping :18-mozzerella .
>     :test pizza:hasTopping :18-rundergehakt .
> }
>
>  conf.ttl
> 
>
> @prefix : <#> .
> @prefix fuseki: <http://jena.apache.org/fuseki#> .
> @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
> @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
> @prefix ja: <http://jena.hpl.hp.com/2005/11/Assembler#> .
> @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
>
> :service1 rdf:type fuseki:Service ;
> fuseki:name "test" ;
> fuseki:serviceQuery "query" ; # SPARQL query service
> fuseki:serviceUpdate "update" ;
> fuseki:serviceUpload "upload" ; # Non-SPARQL upload service
> fuseki:serviceReadWriteGraphStore "data" ; # SPARQL Graph store
> protocol (read and write)
> # A separate read-only graph store endpoint:
> fuseki:serviceReadGraphStore "get" ; # SPARQL Graph store protocol
> (read only)
> fuseki:dataset :dataset .
>
> :dataset rdf:type ja:RDFDataset ;
> ja:defaultGraph :unionModel ;
> ja:namedGraph
> [ ja:graphName  <https://www.tno.nl/agrifood/graph/pizza/data>
> ;
>   ja:graph  :itemGraph ] ;
> ja:namedGraph
> [ ja:graphName  <https://www.tno.nl/agrifood/graph/pizza/onto>
> ;
>       ja:graph  :ontoGraph ]
>  .
>
> :unionModel rdf:type ja:UnionModel ;
> ja:rootModel :itemGraph ;
> ja:subModel :ontoGraph .
>
> :itemGraph rdf:type ja:MemoryModel ;
> ja:content [ ja:externalContent
>  ] .
>
> :ontoGraph rdf:type ja:MemoryModel ;
> ja:content [ ja:externalContent
>  ] .
>
>
>
>
> -Original Message-
> From: Claude Warren 
> Sent: donderdag 26 september 

Re: [UnionModel] question about unexpected behaviour

2019-09-26 Thread Claude Warren
Empty graphs in Fuseki simply don't exist.  They no more exist than a
predicate that has not been used.  Named graphs only come into existence
when there is at least one triple added to them.  If you think about datasets
as collections of quads (graph, subject, predicate, object) then you can see that the dataset
can only locate graphs that have data.  The only other option is that an
infinite number of graphs exist, in which case listing all the graphs would
be impossible.  I am sure there is a deep philosophical discussion here but
the Jena  rule is that if the graph does not have data it does not exist.

As for your second question, you did not give an example of how you
inserted the data.  If you did not provide a graph name to insert into then
the data would go into the default graph named "urn:x-arq:DefaultGraph"
(defined in org.apache.jena.sparql.core.Quad.defaultGraphIRI).  If you did
provide the graph name then the data would be inserted into that graph as
your debugging shows.  When you say it does not exist in the graph are you
referring to on disk?  How did you write the data back out?  The UnionModel
depends on the underlying models to preserve their data.  If those models
are in-memory models then they will not be written back to disk, if they
are TDB models (or other similar auto storing models) then they should  be
written back to the underlying storage.  This assumes that all transactions
and other issues are correctly handled.

Claude



On Thu, Sep 26, 2019 at 10:05 AM Nouwt, B. (Barry)
 wrote:

> Hi all,
>
> I am trying to get the Apache Jena UnionModel working for my scenario, but
> I keep encountering unexpected behavior. I’ve set up a minimal,
> free-standing example on github in this repository (clone the git repo,
> configure Maven in your IDE and execute the main() method, I.e. it probably
> does not run from the .jar):
>
>  https://github.com/barrynl/jena-example
>
>
>
> I have two questions related to this example:
>
> My first question is related to empty graphs in Jena Fuseki. If I load the
> conf.ttl (from the git repo above) at startup in Apache Jena Fuseki without
> the dummy data in the two named graphs, the two graphs seem to disappear (I
> think this does not happen in my Java Example). I read somewhere that empty
> graphs are automatically deleted in Fuseki to prevent old graphs from
> showing up in the Fuseki interface. For my use case I would like to
> configure two (possibly empty at first) named graphs via a conf.ttl and be
> able to store data in them afterwards. Is this possible?
>
> My second question is about the demonstrated behaviour of the Java example
> in the git repository above. I load a dataset with a UnionModel from a
> conf.ttl file and insert new data in it via SPARQL. Although the new data
> shows up in the select query, it does not show up in either of the two
> names graphs the UnionModel consists of. My question is: where is this data
> stored if not in one of the names graphs? (If I inspect the UnionModel via
> a debugger, the inserted data DOES show up in the correct named graph)
>
> Hopefully someone can shed some light on this behaviour. Thanks in advance!
>
> Regards, Barry
> This message may contain information that is not intended for you. If you
> are not the addressee or if this message was sent to you by mistake, you
> are requested to inform the sender and delete the message. TNO accepts no
> liability for the content of this e-mail, for the manner in which you use
> it and for damage of any kind resulting from the risks inherent to the
> electronic transmission of messages.
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Ontology mapping

2019-09-26 Thread Claude Warren
Elio,

I worked on the Granatum project where I built the query engine.  We did
much the same thing that you are trying to do.  Our solution was to create
a vocabulary that the application would use and map that vocabulary to the
vocabularies of the SPARQL endpoints we were querying. The solution
comprised two major components:

1) a "roadmap" that described how a term in one vocabulary was related to a
term in another vocabulary.  We did this as we did not want to modify the
vocabularies of the endpoints themselves as that changes the meaning.

2) a query engine that took a query in our vocabulary, determined which
endpoints might have which data components, built sub-queries for each
endpoint in the vocabulary of that endpoint, and executed all of those as a
single query with multiple federated sub-queries.

This works for a great proportion of the queries in the wild.  However,
there are many edge cases that fall through the cracks.  BioFed is another
strategy, one of several.  There are multiple papers on which ones perform
the best in which situations.  I would suggest taking a look at the two
papers I noted before and check the other papers/projects of the
co-authors.  You will find a number of studies, example code, and in some
cases libraries that may help you solve your problem.

Let me know if I can be of any assistance,
Claude

On Thu, Sep 26, 2019 at 8:43 AM elio hbeich  wrote:

> hello,
>
> I am trying to federate multiple ontologies by adding mapping rules on each
> one of them.
> by doing so, I  keep them independent but at the same time I can query them
> both.
>
> Best regards,
> Elio HBEICH
>
> On Wed, Sep 25, 2019 at 5:27 PM Claude Warren  wrote:
>
> > I am not certain exactly what you are asking.  Are you asking how to
> create
> > an ontology that maps different names for the same concept together (eg.
> > molecular weight -vs- compound weight) so that you can query across them?
> >
> > If so the only examples I know of are
> >
> > BioFed:
> >
> >
> https://www.researchgate.net/publication/315120429_BioFed_Federated_query_processing_over_life_sciences_linked_open_data
> >
> > and
> >
> > Granatum:
> >
> >
> https://www.researchgate.net/publication/301952979_Linked_Biomedical_Dataspace_Lessons_Learned_Integrating_Data_for_Drug_Discovery
> >
> > Hope this helps
> >
> > Claude
> >
> > On Wed, Sep 25, 2019 at 1:04 PM elio hbeich 
> wrote:
> >
> > > Dear All,
> > >
> > > I am searching to connect and map multiple ontologies.
> > > Does anyone have any tool recommendations?
> > >
> > > Thank you in advance.
> > > Best regards,
> > > Elio HBEICH
> > >
> >
> >
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
> >
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Fuseki vs Rya

2019-09-25 Thread Claude Warren
Based on a single talk at ApacheCon NA by Adina Crainiceanu, there is not
much difference in functionality.  Rya does not fully implement SPARQL 1.1,
but it probably does enough to work for most projects.  Rya also does some
interesting things with data-sketch-like methods (
http://incubator.apache.org/projects/datasketches.html) to speed up
processing.  It is implemented on Accumulo and MongoDB.

Jena on the other hand is a more mature project and has years of testing
and bug fixing behind it.  It is implemented on several storage layers
(native, SQL, Cassandra) and provides easy extension points to implement
other storage strategies and integrate them into the query engine.

I suspect they both have their place and that it is a matter of what is
most important to your project.  Only you can determine which trade-offs
are important to your project.

Claude

On Wed, Sep 25, 2019 at 5:25 AM Laura Morales  wrote:

> Now that Rya has been promoted to top-level project, I'd like to hear your
> comments about Fuseki vs Rya: pros and cons of both, and when and why I should use
> one or the other. Thanks!
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Ontology mapping

2019-09-25 Thread Claude Warren
I am not certain exactly what you are asking.  Are you asking how to create
an ontology that maps different names for the same concept together (eg.
molecular weight -vs- compound weight) so that you can query across them?

If so the only examples I know of are

BioFed:
https://www.researchgate.net/publication/315120429_BioFed_Federated_query_processing_over_life_sciences_linked_open_data

and

Granatum:
https://www.researchgate.net/publication/301952979_Linked_Biomedical_Dataspace_Lessons_Learned_Integrating_Data_for_Drug_Discovery

Hope this helps

Claude

On Wed, Sep 25, 2019 at 1:04 PM elio hbeich  wrote:

> Dear All,
>
> I am searching to connect and map multiple ontologies.
> Does anyone have any tool recommendations?
>
> Thank you in advance.
> Best regards,
> Elio HBEICH
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: How to update a remote repository from a Java client...?

2019-09-20 Thread Claude Warren
Using the UpdateBuilder and RDFConnection can make this much easier (i.e.
no missing '<' )
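
For example (the endpoint URL is from the original post; the graph name and
update string are just a sketch):

    try (RDFConnection conn = RDFConnectionFactory.connect(
            "http://testserver:8080/rdf4j-server/repositories/POST_REPO")) {
        conn.update("DROP SILENT GRAPH <http://myorg.dk/something>");
    }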

On Thu, Sep 19, 2019 at 11:23 PM Andy Seaborne  wrote:

>
>
> On 19/09/2019 15:23, Mark Hissink Muller wrote:
> > String postUrl =
> > "http://testserver:8080/rdf4j-server/repositories/POST_REPO";
> >
> >  UpdateRequest request = UpdateFactory.create();
>
> Normally,
>
> UpdateRequest request = UpdateFactory.create(string);
>
> >
> >  request.add("DROP GRAPH ")
> >
> >  .add("CREATE GRAPH http://myorg.dk/something>");
>
> There's a syntax error there - missing < in the URI.
>
> >
> >  // .add("LOAD  INTO
> > ") ;
> >
> >  request.setBaseURI(postUrl);
> Does not do anything.  Base URIs are the "BASE" feature of parsing.
>
> >
> >  Dataset dataset = DatasetFactory.create();
> >
> >  dataset.setDefaultModel(customModel.getModel());
> >
> >  UpdateAction.execute(request, dataset);
>
> So we have a local Dataset in-memory, something in the default model, no
> named graphs.
>
>
> DROP GRAPH ;
>
> That does not do anything because there are no triples in the graph
> named 
>
>
> CREATE GRAPH 
>
> That does not do anything either. Graphs get created as needed, no need
> to create them in this kind of dataset.
>
> >  log.info("This point is reached without a problem.");
>
> The syntax error would have happened.
>
> And if the code exits, the in memory dataset is lost.
>
> Did you mean to send it to the sever?
>
> UpdateExecutionFactory.createRemote
>
> 
>
> There is also RDFConnection which pulls together all SPARQL operations.
>
>  Andy
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Parsing RDFNode.toString()-s (back) to RDFNodes -- or parsing GROUP_CONCAT to RDFLists

2019-09-05 Thread Claude Warren
I am not certain, but I think that the QueryBuilder AbstractQueryBuilder
class has static methods that take objects and create nodes from them.  I
think that passing the string representation with the '<' and '>' prefix and
suffix will work for URLs.  If you have names like dc:name and "dc" is
prefix-mapped to Dublin Core, then that would also be parsed if you pass the
prefix map.  Anyway, take a look at the querybuilder to see if it will do what
you want.

Claude
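
If the querybuilder route doesn't pan out, ARQ's NodeFactoryExtra.parseNode
(in org.apache.jena.sparql.util) is also worth a look; a small untested
sketch:

    Node uri = NodeFactoryExtra.parseNode("<http://example.org/name>");
    Node lit = NodeFactoryExtra.parseNode(
        "\"42\"^^<http://www.w3.org/2001/XMLSchema#int>");

Note that it parses node syntax (as in N-Triples/SPARQL), which is close to
but not identical to RDFNode.toString() output, so literals may need their
quotes and datatype markers intact.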

On Wed, Sep 4, 2019 at 10:42 AM Martin G. Skjæveland <
m.g.skjaevel...@gmail.com> wrote:

> Hi all,
>
> in my application there is special support for lists (without going into
> further detail), and I would like to be able to have SPARQL queries that
> return lists. Since this is not supported in SPARQL, my idea is to
> exploit and consider GROUP_CONCAT "columns" in SPARQL result sets as
> lists and split and parse these part of the split to an RDFList (of
> RDFNodes).
>
> Does this sound reasonable? Is there parsing functionality in Jena to
> handle this already? Perhaps there is something like  RDFNode
> parse(String)  which parses strings on the same format as
> RDFNode.toString() would produce back to an RDFNode?
>
> Thanks!
>
> Martin
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: StreamRDF.base()

2019-08-15 Thread Claude Warren
Learn something new every day...

On Wed, Aug 14, 2019 at 5:44 PM Andy Seaborne  wrote:

>
>
>
> (and it is technically wrong to have a # in the base)
>
>
so as a base "http://example.com/myfile.txt#" is incorrect but
"http://example.com/myfile.txt/" is correct?

or technically does the last segment of the base need to be an NCName[1]?
in which case "http://example.com/myfile.txt" but not
"http://example.com/myfile.txt#" or "http://example.com/myfile.txt/"

How does one create a technically correct base that will convert

to

or


?  Or is that just not possible?

Thx,
Claude


[1] https://www.w3.org/TR/xml-names/#NT-NCName
-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: StreamRDF.base()

2019-08-14 Thread Claude Warren
The StreamRDF just passes the base() argument on to the sink so that the
sink has the base it needs to create the fully qualified URIs from local
URIs.

StreamRDFLib simply ignores the base() call.  I assume this is because it
is intended to process fully qualified RDF.

I think the assumption is that if you are streaming into the graph you
would need the base() to resolve any inbound local URIs while if you are
streaming out from the graph the URIs are already fully resolved.

I didn't write this code so I am not certain but if that is the case
perhaps we should note it in the javadocs.

I do note that StreamRDF says it is for output, in which case I am not
certain why the base() is needed at all.
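For the input direction, here is a minimal sketch of where base() matters
in practice (the file name and base are placeholders):

import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFParser;
import org.apache.jena.riot.system.StreamRDF;
import org.apache.jena.riot.system.StreamRDFLib;

StreamRDF sink = StreamRDFLib.writer(System.out);
// The base set here is what arrives at the sink via StreamRDF.base(),
// and relative URIs in the source are resolved against it while parsing.
RDFParser.create()
         .source("data.ttl")           // placeholder input file
         .base("http://example.com/")
         .lang(Lang.TTL)
         .parse(sink);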

Claude



On Tue, Aug 13, 2019 at 11:46 PM Martynas Jusevičius 
wrote:

> Hi,
>
> I'm trying to understand what the purpose/usage of StreamRDF.base() is.
>
> Isn't it supposed to set the base URI that relative URIs in the stream
> resolve against?
>
> I've made a simple test:
>
> StreamRDF rdfStream = StreamRDFLib.writer(new BufferedWriter(new
> OutputStreamWriter(System.out)));
> rdfStream.start();
> rdfStream.base("http://localhost/");
> rdfStream.triple(new Triple(NodeFactory.createBlankNode(),
> NodeFactory.createURI("relative"), NodeFactory.createBlankNode()));
> rdfStream.finish();
>
> The output I get:
>
> _:Bf410fc50X2De0baX2D464eX2D996eX2Dbb3207090baa 
> _:B4b65b796X2D3561X2D4bf3X2Dbf31X2D1154aac0c816 .
>
> Why is the property URI  and not ?
> Doesn't that make the output invalid N-Triples? Or am I writing it wrong?
>
> Martynas
> atomgraph.com
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: InfModels in Dataset on Fuseki.

2019-07-15 Thread Claude Warren
After more digging I came across the GraphMaker interface in the
DatasetGraphFactory.  Since what I want to do is reason over a TDB graph
what I am planning on doing is

1) creating a SoftDatasetGraphMap (DatasetGraph) that will discard the
datasets when garbage collection needs the space.
2) create a GraphMaker that wraps a Dataset (TDB in my case) and will pull
graphs from the underlying dataset.
3) Given that I have a reasoner, figure out how to apply the reasoner to a
specific graph in the TDB dataset and return the resulting InfModel.  That
model will be placed in the SoftDatasetGraphMap.
4) Use the SoftDatasetGraphMap to construct the Dataset that Fuseki will
serve.

I am hoping that this strategy will allow me to create new named graphs in
TDB and when they are served  by Fuseki have them be reasoned over.

In addition, when the graph is no longer being used (and memory is at a
premium) the graph can be discarded as the reasoner can be recreated from
the underlying TDB.
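For step 3 the core would be something like this sketch (the directory,
graph name and rule are placeholders):

import org.apache.jena.query.Dataset;
import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.rulesys.GenericRuleReasoner;
import org.apache.jena.reasoner.rulesys.Rule;
import org.apache.jena.tdb.TDBFactory;

Dataset tdb = TDBFactory.createDataset("/data/tdb");             // placeholder path
Model data = tdb.getNamedModel("http://example.com/graph1");     // placeholder graph
Reasoner reasoner = new GenericRuleReasoner(
        Rule.parseRules("[r1: (?a ?p ?b) -> (?b ?p ?a)]"));      // placeholder rule
InfModel inf = ModelFactory.createInfModel(reasoner, data);
// 'inf' is what would go into the SoftDatasetGraphMap for Fuseki to serve.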

Thoughts?

Claude

On Mon, Jul 15, 2019 at 11:57 AM Andy Seaborne  wrote:

> Hi Claude,
>
> On 13/07/2019 22:26, Claude Warren wrote:
> > Greetings,
> >
> > I know that Fuseki serves datasets.
> >
> > I know that the RDFConnecton will use Fuseki as its storage.
> >
> > I know that the RDFConnection will create named graphs if they are in the
> > dataset in the connection.loadDataset( dataset ) call.
> >
> > I know that if RDFConnection is talking to Fuseki it will create the
> named
> > graphs in the dataset on Fuseki.
> >
> > I have a rulesReasoner and a schema and know how to make and InfModel
> from
> > it and a schema.
> >
> > What I don't know is how to get Fuseki to create the new named graphs as
> > InfModels when created via the RDFConnection.
> >
> > Is this possible?
>
> An InfModel within a dataset needs specific setup in the configuration
> file and that isn't accessible via RDFConnection.
>
> The setup with inference uses the general linked-graph dataset
> implementation so there is nothing to stop its configuration while
> running.  It is specific to that kind of dataset.
>
> RDFConnection isn't the right place - it is more like a server
> administration function.  Being able to upload a new, replacement
> dataset configuration could be done (and avoid a server restart).  Only
> (?? I think - it's been a long time since Ii looked) new databases can
> be created by configuration upload.
>
> >
> > Additionally is it possible to have the DeductionsModel maintained in a
> TDB
> > datastore not in-memory as the documentation states?
>
> Don't know - sorry.
>
> > I figure I can put the schema in the dataset with the other named graphs
> so
> > that it is not in memory either.
> >
> > Any help would be appreciated.
> >
> > Claude
> >
> >
> >
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


InfModels in Dataset on Fuseki.

2019-07-13 Thread Claude Warren
Greetings,

I know that Fuseki serves datasets.

I know that the RDFConnecton will use Fuseki as its storage.

I know that the RDFConnection will create named graphs if they are in the
dataset in the connection.loadDataset( dataset ) call.

I know that if RDFConnection is talking to Fuseki it will create the named
graphs in the dataset on Fuseki.

I have a rulesReasoner and a schema and know how to make and InfModel from
it and a schema.

What I don't know is how to get Fuseki to create the new named graphs as
InfModels when created via the RDFConnection.

Is this possible?

Additionally is it possible to have the DeductionsModel maintained in a TDB
datastore not in-memory as the documentation states?

I figure I can put the schema in the dataset with the other named graphs so
that it is not in memory either.

Any help would be appreciated.

Claude



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: XML vs RDF namespaces

2019-07-05 Thread Claude Warren
OK.  After further examination the issue I am seeing is that the TURTLE
representation and the RDF-XML representation do not match.

If I read the XML document and serialize it out as RDF-XML I get the expected
namespaces.  When I serialize it out as TURTLE there is no separator
between the namespace and the local name.  I can see that this is as
expected, but it means that once serialized to TURTLE some information is lost
in that Jena calculates the namespace to a different value.  I know that
this is due to the way namespaces are handled in Jena vs XML, in that Jena
always carries the expanded version and the expansion of
<item xmlns="http://example.com" /> technically is <http://example.comitem />,
but it does make for strange reading.

I was wondering if it would make sense to have a utility/option that would
insert either '#' or '/' on RDF-XML namespaces that end with a valid
XMLName character.  But then the name would be different across the systems.  So
I suppose that really is a bad idea.
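To make the concatenation behaviour concrete, a small sketch:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

Model m = ModelFactory.createDefaultModel();
// A prefix without a trailing '/' or '#', as in the externally curated XML
m.setNsPrefix("ex", "http://example.com");
// Jena forms the full URI by plain concatenation...
String uri = m.expandPrefix("ex:item");                    // http://example.comitem
// ...and shortens by the same rule when writing TURTLE
String shortForm = m.shortForm("http://example.comitem");  // ex:item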

Thanks for your time,
Claude

On Fri, Jul 5, 2019 at 10:00 AM Andy Seaborne  wrote:

> On 05/07/2019 08:20, Martynas Jusevičius wrote:
> >> When XML parser parses the document it internally adds a slash '/'
> > between  the namespace and  the local name.
> >
> > Are you sure that is the case? Sounds weird and likely non-conformant.
>
> It's possibly because one of the processors process has also applied
> "normalization" - it is not to do with namespaces, it is to do with URIs.
>
> > If you can’t change the existing XSLT stylesheet, you could pipeline the
> > RDF/XML through a second one which removes the slashes.
> >
> > This has little to do with Jena really.
>
> I agree.  Looks liek it is is the XML part.
>
> >
> > On Fri, 5 Jul 2019 at 09.59, Claude Warren  wrote:
> >
> >> Greetings,
> >>
> >> Background:
> >> I have an externally curated XML file that I run through an XSLT
> transform
> >> to create an RDF XML file to load into Jena.
> >>
> >> The namespace URIs in the XML  end with a valid XML nameChar (i,e, not a
> >> slash '/' or hash '#').  When XML parser parses the document it
> internally
> >> adds a slash '/' between  the namespace and  the local name.  Jena does
> >> not.
>
> In XML, qname is a pair (namespaceURI, localName).
>
> Core XML does not have the concept of making that a single string.
>
> The rule for turning that into an single URI is specific to RDF.
> The rule is concatenation.
>
> The RDF world used talk about qnames - that was sloppy language.
> Better is "prefixed name".  Turtle rdf:type is a short hand for a long
> URI. Not an XML qname.
>
> How in your XSLT process that becomes a single string is what you need
> to look at.
>
> Surely the XSLT script could perform URI processing?
>
> F&O talks about it (section 1.7.2) but there isn't an function that I
> could see.
>
> >> So:
> >> <item xmlns="http://example.com" />
> >>
> >> in the XML parser and XSL transformer become:
>
> So how is that happening? XSLT rule?
>
> >> <http://example.com/item />
> >> but in Jena becomes
>
> when parsing RDF/XML.
>
> If the namespace is used a prefix in Turtle the same happens. Notice the
> change in terminology namespace -> prefix.
>
> >> <http://example.comitem />
>
> Possibly because the XML parser or XSLT processor also normalized the
> URI or has a rule involving the formation of the URI string.
>
> Have a look at the place where the qname becomes a single string output.
>
> >>
> >> As the xml is externally curated I can not easily change the xmlns
> elements
> >> to contain the XML parser inserted trailing slash.
> >>
> >> The XSLT namespaces must match the XML namespaces to correctly function.
> >>
> >> Question:
> >>
> >> Is there a flag that can be used during parsing of XML formatted RDF
> data
> >> that will cause the system to insert the XML expected slash '/' ?
>
> If happening in the XML parser you are using, one possibility is to
> parse the XML, write out then feed into ARP. Or add an XSLT step.
> Anything to kick in the XML processing part.
>
> >>
> >> Claude
>
>  Andy
>
> >>
> >> --
> >> I like: Like Like - The likeliest place on the web
> >> <http://like-like.xenei.com>
> >> LinkedIn: http://www.linkedin.com/in/claudewarren
> >>
> >
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


XML vs RDF namespaces

2019-07-04 Thread Claude Warren
Greetings,

Background:
I have an externally curated XML file that I run through an XSLT transform
to create an RDF XML file to load into Jena.

The namespace URIs in the XML end with a valid XML nameChar (i.e. not a
slash '/' or hash '#').  When the XML parser parses the document it internally
adds a slash '/' between the namespace and the local name.  Jena does not.

So:
<item xmlns="http://example.com" />

in the XML parser and XSL transformer becomes:
<http://example.com/item />

but in Jena becomes
<http://example.comitem />

As the xml is externally curated I can not easily change the xmlns elements
to contain the XML parser inserted trailing slash.

The XSLT namespaces must match the XML namespaces to correctly function.

Question:

Is there a flag that can be used during parsing of XML formatted RDF data
that will cause the system to insert the XML expected slash '/' ?

Claude

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: UUID vs compact id

2019-07-02 Thread Claude Warren
I agree with Andy, but note that there is a URN specification for UUID (
https://tools.ietf.org/html/rfc4122)

So the URI would be urn:uuid:

I used this in a recent project.
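Minting one in Jena is a one-liner, e.g.:

import java.util.UUID;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.rdf.model.ResourceFactory;

Resource r = ResourceFactory.createResource("urn:uuid:" + UUID.randomUUID());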

Claude

On Tue, Jul 2, 2019 at 8:53 AM Andy Seaborne  wrote:

> I doubt one URI design vs another where it will make any observable
> difference.
>
>  Andy
>
> On 01/07/2019 18:50, Siddhesh Rane wrote:
> > You could save the UUID as a 128 bit number in the database.
> > Conversion between alphanumeric and byte encoded UUIDs can be done on
> > the fly.
> > This would be the most compact solution compared to text format.
> >
> > Regards
> > Siddhesh Rane
> >
> > On Mon, Jul 1, 2019 at 9:02 PM Mikael Pesonen
> >  wrote:
> >>
> >>
> >> We are now using UUIDs for resource ids, e.g.
> >> https://example.com/f0c6b590-0bd6-4c66-8872-f6a0f3aa33ac where id
> length
> >> is 38 characters.
> >>
> >> Would it be any better performance wise to use more compact id
> >> https://example.com/jgie3590roGvnfsjvGUEu using 21 alphanum characters
> >>
> >> UUID is a standard atleast and better supported in various systems.
> >>
> >> --
> >> Lingsoft - 30 years of Leading Language Management
> >>
> >> www.lingsoft.fi
> >>
> >> Speech Applications - Language Management - Translation - Reader's and
> Writer's Tools - Text Tools - E-books and M-books
> >>
> >> Mikael Pesonen
> >> System Engineer
> >>
> >> e-mail: mikael.peso...@lingsoft.fi
> >> Tel. +358 2 279 3300
> >>
> >> Time zone: GMT+2
> >>
> >> Helsinki Office
> >> Eteläranta 10
> >> FI-00130 Helsinki
> >> FINLAND
> >>
> >> Turku Office
> >> Kauppiaskatu 5 A
> >> FI-20100 Turku
> >> FINLAND
> >>
> >
> >
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Getting EnhGraph / ModelCom from Resource / Model ?

2019-04-20 Thread Claude Warren
There are some interesting issues with persistence annotations in a
graph (perhaps I should present a paper/talk on it) and there are two
distinctly different approaches between the two PA4RDF versions.  The
first assumes that the complete graph is available locally (i.e. not
on a remote server), the second does not make that assumption but then
tries to cache the remote data as it is needed and/or modified so that
it can submit proper updates to the remote.  There are issues here
with anonymous nodes (the Jena implementation provides a way around
those).  But I am not certain that the implementation is wholly
correct.

I was thinking about splitting the project into the annotations and 2
distinct implementations as noted above.  I have not had time to work
on the project recently, but if you are interested I am certain we
could work together to figure out how to make this fly.

Claude

On Tue, Apr 16, 2019 at 6:23 PM Dan Davis  wrote:
>
> Claude,
>
> I've now noticed you are actually the author of PA4RDF, and I'm wondering
> what your plans are for releasing a version of PA4RDF that uses Jena 3+.
>  I would ask in a github issue, but I'm sure you've got one, as there are
> snapshots, and it seems better to keep the conversation all together.
>
> On Mon, Apr 8, 2019 at 1:34 PM Dan Davis  wrote:
>
> > That is true - I was about to post that PA4RDF seems just what is needed.
> >
> > It is even possible that your RDF may have more properties than you expose
> > at the pojo layer.   The examples deal mostly with Data Properties, and no
> > query language is mentioned.  It is quite possible that this is not a
> > deficit of the package, but of the documentation.  I also note that the
> > latest non-snapshot release is still on Jena 2, but that
> >
> > https://oss.sonatype.org/content/repositories/snapshots/org/xenei/PA4RDF/2.0-SNAPSHOT/
> > is up to date enough.
> >
> > On Mon, Apr 8, 2019 at 8:51 AM Claude Warren  wrote:
> >
> >> PA4RDF is an OGM.  It just views the subject of triples as an
> >> identifier for one or my POJOs.  The properties are mapped to
> >> getter/setter methods and the objects are the values.  The entity
> >> manager manages converting URL objects to POJOs.  PA4RDF uses 2
> >> annotations, one to map the POJO to the RDF subject and the other to
> >> identify the POJO properties that should be mapped to properties.
> >>
> >> Claude
> >>
> >> On Mon, Apr 8, 2019 at 7:00 AM Dan Davis  wrote:
> >> >
> >> > To me, it seems unfortunate to make the Resource do all the work, and
> >> make
> >> > it more than the subject of triples.  Some labeled property graphs
> >> define
> >> > an OGM (Object/Graph Mapping).  I wonder whether that concept has any
> >> > bearing here.
> >> >
> >> > Anyway, I've wished for an OGM sort of mapping to a POJO for awhile.
> >>  One
> >> > annotation to say its a Jena resource.   Another Annotation to say which
> >> > method returns its URI, and give an optional prefix.
> >> >
> >> > Additional annotations to label setters or properties by predicate -
> >> this
> >> > would not be too bad coupled with org.apache.jena.vocabulary.
> >> >
> >> > Might need different annotations for object properties and data
> >> properties.
> >> >
> >> >
> >> > On Sun, Apr 7, 2019 at 7:54 AM Claude Warren  wrote:
> >> >
> >> > > Thomas,
> >> > >
> >> > > I think that PA4RDF does/did what you want to do
> >> > > (https://github.com/Claudenw/PA4RDF)  Basically, you define
> >> FoafPerson
> >> > > as an interface and annotate the getX methods to identify the
> >> > > properties that identify the value.  Something like:
> >> > >
> >> > > {noformat}
> >> > > @Subject(namespace = "http://xmlns.com/foaf/0.1/",
> >> > > class="http://xmlns.com/foaf/0.1/FoafPerson")
> >> > > public interface FoafPerson {
> >> > >
> >> > > String getName();
> >> > >@Predicate
> >> > > vod setName( String name );
> >> > > boolean hasName();
> >> > > }
> >> > > {noformat}
> >> > >
> >> > > Create an use an entity manager by requesting
> >> > >
> >> > > {noformat}
> >> > > FoafPerson fp = entityManager.read( resource, FoafPerson.class );
> >> 

Re: Getting EnhGraph / ModelCom from Resource / Model ?

2019-04-08 Thread Claude Warren
PA4RDF is an OGM.  It just views the subject of triples as an
identifier for one of my POJOs.  The properties are mapped to
getter/setter methods and the objects are the values.  The entity
manager manages converting URL objects to POJOs.  PA4RDF uses 2
annotations, one to map the POJO to the RDF subject and the other to
identify the POJO properties that should be mapped to properties.

Claude

On Mon, Apr 8, 2019 at 7:00 AM Dan Davis  wrote:
>
> To me, it seems unfortunate to make the Resource do all the work, and make
> it more than the subject of triples.  Some labeled property graphs define
> an OGM (Object/Graph Mapping).  I wonder whether that concept has any
> bearing here.
>
> Anyway, I've wished for an OGM sort of mapping to a POJO for awhile.   One
> annotation to say its a Jena resource.   Another Annotation to say which
> method returns its URI, and give an optional prefix.
>
> Additional annotations to label setters or properties by predicate - this
> would not be too bad coupled with org.apache.jena.vocabulary.
>
> Might need different annotations for object properties and data properties.
>
>
> On Sun, Apr 7, 2019 at 7:54 AM Claude Warren  wrote:
>
> > Thomas,
> >
> > I think that PA4RDF does/did what you want to do
> > (https://github.com/Claudenw/PA4RDF)  Basically, you define FoafPerson
> > as an interface and annotate the getX methods to identify the
> > properties that identify the value.  Something like:
> >
> > {noformat}
> > @Subject(namespace = "http://xmlns.com/foaf/0.1/",
> > class="http://xmlns.com/foaf/0.1/FoafPerson")
> > public interface FoafPerson {
> >
> > String getName();
> >@Predicate
> > vod setName( String name );
> > boolean hasName();
> > }
> > {noformat}
> >
> > Create an use an entity manager by requesting
> >
> > {noformat}
> > FoafPerson fp = entityManager.read( resource, FoafPerson.class );
> > {noformat}
> >
> > Will create fp as an instance of FoafPerson backed by the specified
> > resource.
> >
> > PA4RDF used dynamic proxies so it can create objects that are
> > combinations of objects.  All objects returned implement
> > ResourceWrapper so you can allways call fp.getResource() to get the
> > resource back from the object.
> >
> > Claude
> >
> > Note: If you decide to go this route use the 1.1 version as the
> > current head branch is a different implementation and has some
> > significant issues.
> >
> >
> >
> > On Wed, Apr 3, 2019 at 8:33 AM Thomas Francart
> >  wrote:
> > >
> > > Hello
> > >
> > > Le mar. 2 avr. 2019 à 21:54, ajs6f  a écrit :
> > >
> > > > Personality and related types are described briefly here:
> > > >
> > > >
> > > >
> > https://jena.apache.org/documentation/notes/jena-internals.html#enhanced-nodes
> > >
> > >
> > > Thanks, this describes what I would like to do :
> > >
> > > ```
> > >
> > >1. define an interface I for the new enhanced node. (You could use
> > just
> > >the implementation class, but we've stuck with the interface, because
> > there
> > >might be different implementations)
> > >2. define the implementation class C. This is just a front for the
> > >enhanced node. All the state of C is reflected in the graph (except
> > for
> > >caching; but beware that the graph can change without notice).
> > >3. define an Implementation class for the factory. This class defines
> > >methods canWrap and wrap, which test a node to see if it is allowed to
> > >represent I and construct an implementation of Crespectively.
> > >4. Arrange that the personality of the graph maps the class of I to
> > the
> > >factory. At the moment we do this by using (a copy of) the built-in
> > graph
> > >personality as the personality for the enhanced graph.
> > >
> > > ```
> > >
> > > >
> > > >
> > > > I know little about them, but to my understanding, that is not a
> > > > well-developed part of Jena and the technical ideas there are not under
> > > > active development. You are noticing that when you find the
> > constructor odd
> > > > and hard to fill; that's in part because it mixes Jena's API (Resource,
> > > > Statement, Model, etc.) with its SPI (Node, Triple, Graph, etc.).
> > > >
> > > > Could you tell u

Re: Getting EnhGraph / ModelCom from Resource / Model ?

2019-04-07 Thread Claude Warren
Thomas,

I think that PA4RDF does/did what you want to do
(https://github.com/Claudenw/PA4RDF).  Basically, you define FoafPerson
as an interface and annotate the getX methods to identify the
properties that identify the value.  Something like:

{noformat}
@Subject(namespace = "http://xmlns.com/foaf/0.1/",
         class="http://xmlns.com/foaf/0.1/FoafPerson")
public interface FoafPerson {

    String getName();

    @Predicate
    void setName( String name );

    boolean hasName();
}
{noformat}

Create and use an entity manager by requesting

{noformat}
FoafPerson fp = entityManager.read( resource, FoafPerson.class );
{noformat}

This will create fp as an instance of FoafPerson backed by the specified resource.

PA4RDF uses dynamic proxies so it can create objects that are
combinations of objects.  All objects returned implement
ResourceWrapper so you can always call fp.getResource() to get the
resource back from the object.

Claude

Note: If you decide to go this route use the 1.1 version as the
current head branch is a different implementation and has some
significant issues.



On Wed, Apr 3, 2019 at 8:33 AM Thomas Francart
 wrote:
>
> Hello
>
> Le mar. 2 avr. 2019 à 21:54, ajs6f  a écrit :
>
> > Personality and related types are described briefly here:
> >
> >
> > https://jena.apache.org/documentation/notes/jena-internals.html#enhanced-nodes
>
>
> Thanks, this describes what I would like to do :
>
> ```
>
>1. define an interface I for the new enhanced node. (You could use just
>the implementation class, but we've stuck with the interface, because there
>might be different implementations)
>2. define the implementation class C. This is just a front for the
>enhanced node. All the state of C is reflected in the graph (except for
>caching; but beware that the graph can change without notice).
>3. define an Implementation class for the factory. This class defines
>methods canWrap and wrap, which test a node to see if it is allowed to
>represent I and construct an implementation of Crespectively.
>4. Arrange that the personality of the graph maps the class of I to the
>factory. At the moment we do this by using (a copy of) the built-in graph
>personality as the personality for the enhanced graph.
>
> ```
>
> >
> >
> > I know little about them, but to my understanding, that is not a
> > well-developed part of Jena and the technical ideas there are not under
> > active development. You are noticing that when you find the constructor odd
> > and hard to fill; that's in part because it mixes Jena's API (Resource,
> > Statement, Model, etc.) with its SPI (Node, Triple, Graph, etc.).
> >
> > Could you tell us a little more about what you are doing? Why do you want
> > to extend Resource?
> >
>
> I'd like to do :
>
> 1. Define an interface that extends Resource :
>
> ```
> public interface FoafPerson extends Resource {
>public String getName();
> }
> ```
>
> 2. Define an implementation :
>
> ```
> public interface FoafPersonImpl implements FoafPerson extends ResourceImpl {
>// constructor : how will this be called ???
>public FoafPersonImpl(Node node, EnhGraph graph) {
> super(node, graph);
> }
>
>@Override
>public String getName() {
>  return this.getProperty(FOAF.name).getString();
>}
> }
> ```
>
> 3. Define an Implementation class :
>
> ... here, I am lost
>
> 4. Maps the class I to the factory :
>
> ... here, I am lost
>
> 5. Use this; I'd like to be able to do :
>
> ```
> Model m = ...;
> Resource r = ...;
> // obtain a view on the Resource as a Foaf Person :
> FoafPerson person = r.as(FoafPerson.class);
> System.out.println(person.getName());
> ```
>
> Any help in filling the blanks above would be appreciated !
>
> Thanks
> Thomas
>
>
> > ajs6f
> >
> > > On Apr 2, 2019, at 3:48 PM, Thomas Francart 
> > wrote:
> > >
> > > Hello
> > >
> > > I would like to declare data structure with an Interface that extends
> > Jena
> > > Resource and its implementation that extends Jena ResourceImpl in the
> > same
> > > way as
> > >
> > https://github.com/TopQuadrant/shacl/blob/a3f54abeffc691ff0b15bee7f049741eb6e00878/src/main/java/org/topbraid/shacl/model/SHShape.java
> > > (interface) and
> > >
> > https://github.com/TopQuadrant/shacl/blob/a3f54abeffc691ff0b15bee7f049741eb6e00878/src/main/java/org/topbraid/shacl/model/impl/SHShapeImpl.java
> > > (implementation, which indirectly extends ResourceImpl).
> > > Is it a common and good practice ?
> > >
> > > The constructor of the implementation takes as an input (Node node,
> > > EnhGraph graph).
> > > I don't know how to obtain or build these Node and EnhGraph from plain
> > > Resource / Model I am used to. I will be working with memory or
> > TDB-backed
> > > Models, if that matters. How can I obtain or build these so that I can
> > call
> > > the constructor of my data structure extending ResourceImpl, with a Node
> > > and EnhGraph instance ?
> > >
> > > I feel this could be related to "Personality"

Re: Delete all nested triples

2019-02-21 Thread Claude Warren
Closest thing to iterative might be to use an "IN" clause.
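A sketch of what I mean, assuming the ids are known up front (the example
URIs and the 'dataset' variable are placeholders):

import org.apache.jena.query.Dataset;
import org.apache.jena.update.UpdateAction;

String update =
    "DELETE { ?id ?p ?o } WHERE { ?id ?p ?o . " +
    "  FILTER ( ?id IN ( <http://example.org/id1>, <http://example.org/id2> ) ) }";
UpdateAction.parseExecute(update, dataset);

Note this only removes the triples hanging directly off each id; the nested
blank nodes still need a pattern like the one Paul showed.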

On Thu, Feb 21, 2019 at 9:21 AM Martynas Jusevičius 
wrote:

> You cannot. SPARQL is based on pattern matching.
>
> But why would you need to? Maybe back up and explain that.
>
> On Thu, Feb 21, 2019 at 6:32 AM ganesh chandra
>  wrote:
> >
> > Thanks fo the solution. I was hoping if there was some way we can write
> > something iterative in the query.
> >
> > Thanks,
> > Ganesh
> >
> > On Wed, Feb 20, 2019 at 9:28 PM Paul Tyson 
> wrote:
> >
> > > On Wed, 2019-02-20 at 17:27 -0700, ganesh chandra wrote:
> > > > Hello All,
> > > > My data looks something like this:
> > > >  a something:Entity ;
> > > > something:privateData [ a something:PrivateData ;
> > > > something:jsonContent "{\"fileType\":
> \”jp\"}"^^xsd:string ;
> > > >   something:modeData [a something:data1
> > > >   system:code 1234]
> > > > something:system  ] ;
> > > >
> > > > There are many like the above one and I am trying to write the query
> to
> > > delete all the data if the id matches. How I should I go about doing
> this?
> > > >
> > >
> > > If the data is always in this shape, something like this should work:
> > >
> > > prefix something: 
> > > prefix system: 
> > > DELETE WHERE {
> > >  a something:Entity ;
> > > something:privateData ?_a.
> > > ?_a  a something:PrivateData ;
> > > something:jsonContent ?json ;
> > > something:modeData ?_b;
> > > something:system ?filetype .
> > > ?_b a something:data1;
> > > system:code ?code.
> > > }
> > >
> > > This just replaces the blank nodes with sparql variables.
> > >
> > > It's a good idea to test DELETE updates thoroughly, because they can
> > > often cause surprises. One way to see what will be deleted is to change
> > > the DELETE to SELECT and run it as a query. That will show you exactly
> > > what triples will be deleted.
> > >
> > > Regards,
> > > --Paul
> > >
> > >
> > > --
> > Ganesh Chandra S
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: CSV to rdf

2019-02-15 Thread Claude Warren
Try Any23 (http://any23.apache.org/); its job in life is to convert from
other formats to RDF, and it has a CSV conversion module.

Claude

On Thu, Feb 14, 2019 at 1:59 PM elio hbeich  wrote:

> Dear all
>
> Do you have any suggestion about tools or XSLT that can transform CSV to
> RDF
>
> Thank you in advance,
> Elio HBEICH
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Reasoner and updating data in Fuseki

2018-12-31 Thread Claude Warren
I was afraid that was the answer.  I am thinking that I might try to build
a new reasoner model implementation that would specify 3 graphs: base truth
graph, rules and derived truth graph.  The graph would expose base truth,
rules, and base+derived truth as endpoints for query and allow updating for
base truth or rules.  I think the reasoner has the ability to accept new
rules; it is just the Fuseki interface that does not provide access to them.

Not sure when I can get to explore this idea. :(

Claude

On Sun, Dec 30, 2018 at 5:14 PM ajs6f  wrote:

> Claude--
>
> I don't think you ever got a response on this. I'm not very expert with
> the inference framework, but am I correct in understanding your question to
> be: how, using only the assembler tools and Fuseki, can one update the
> schema of an inference model?
>
> If so, I don't think that _is_ possible (I hope I'm wrong and someone
> corrects me!), but some options that might be interesting:
>
> • Building a new dataset on-demand for schema changes. I think you should
> be able to use a new assembler doc for each change:
>
>
> https://jena.apache.org/documentation/fuseki2/fuseki-server-protocol.html#assembler-example
>
> (apparently we need an example there, :sigh:) and if your actual triples
> are resident on disk (in TDB, say) you should be able to avoid copying them.
>
> • Using a Fuseki extension. I haven't looked at this much (it's pretty
> new) but Andy has done a large amount of work to offer custom endpoints for
> Fuseki. It might be possible to offer an endpoint to load a new schema or
> do some other coarser manipulation to get what you need done.
>
> ajs6f
>
> > On Dec 6, 2018, at 9:55 AM, Claude Warren  wrote:
> >
> > Somehow I missed this response.
> >
> > If the schema and the data are in the same graph everything is fine, but
> I
> > have schema and data in separate graphs.
> >
> > So without putting the graphs together into a single graph is it
> possible,
> > via fuseki, to update the "schema" rules in an inference graph?
> >
> > Even if I put the schema and the data together in a union graph and run
> the
> > inferencer on that the updates still have to go to one or the other graph
> > right?  So i have to start mixing the schema and the data.
> >
> > Is there a way around this?  If not, does anyone have any idea how hard
> it
> > would be to implement an inference model that would allow updates of the
> > schema data.  I figure if that is available I can plug that into Fuseki
> > with a couple of custom hooks and do the updates.
> >
> > Claude
> >
> > On Fri, Nov 30, 2018 at 1:11 PM Andy Seaborne  wrote:
> >
> >>
> >>
> >> On 27/11/2018 16:57, Claude Warren wrote:
> >>> I have a case using Fuseki.  I have 2 named graphs call them "data" and
> >>> "schema".  Data contains all the data, schema contains all the RDFS
> based
> >>> triples.
> >>>
> >>> I can configure Fuseki so that an inference model uses an RDF Reasoner
> to
> >>> apply the "schmea" rules to the "data", call this graph "inf".  In
> >> addition
> >>> I think fuseki can be configured so that any updates to "inf" are added
> >> as
> >>> triples to "data".
> >>>
> >>> My issue is that there are periodic updates to the "schema" (e.g. when
> >> new
> >>> a RDF class is created).  My understanding is that the reasoners do not
> >>> like it when you make updates to the graphs they are manipulating
> without
> >>> going through the reasoner itself.  So is there a way to make updates
> to
> >>> the "schema" graph via fuseki such that the "inf" graph/reasoner will
> be
> >>> happy?
> >>
> >> What happens if schema and data are in the same graph and so the updates
> >> do go via the inference engine?
> >>
> >>>
> >>> Claude
> >>>
> >>
> >
> >
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
>
>

-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Reasoner and updating data in Fuseki

2018-12-06 Thread Claude Warren
Somehow I missed this response.

If the schema and the data are in the same graph everything is fine, but I
have schema and data in separate graphs.

So without putting the graphs together into a single graph is it possible,
via fuseki, to update the "schema" rules in an inference graph?

Even if I put the schema and the data together in a union graph and run the
inferencer on that the updates still have to go to one or the other graph
right?  So I have to start mixing the schema and the data.

Is there a way around this?  If not, does anyone have any idea how hard it
would be to implement an inference model that would allow updates of the
schema data.  I figure if that is available I can plug that into Fuseki
with a couple of custom hooks and do the updates.
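For what it is worth, the building block I would start from is binding the
schema to the reasoner separately from the data, roughly (in-memory models
assumed here):

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.reasoner.Reasoner;
import org.apache.jena.reasoner.ReasonerRegistry;

Model schema = ModelFactory.createDefaultModel();   // the "schema" graph
Model data   = ModelFactory.createDefaultModel();   // the "data" graph
Reasoner reasoner = ReasonerRegistry.getRDFSReasoner().bindSchema(schema);
InfModel inf = ModelFactory.createInfModel(reasoner, data);
// The open question is rebinding when 'schema' changes without rebuilding 'inf'.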

Claude

On Fri, Nov 30, 2018 at 1:11 PM Andy Seaborne  wrote:

>
>
> On 27/11/2018 16:57, Claude Warren wrote:
> > I have a case using Fuseki.  I have 2 named graphs call them "data" and
> > "schema".  Data contains all the data, schema contains all the RDFS based
> > triples.
> >
> > I can configure Fuseki so that an inference model uses an RDF Reasoner to
> > apply the "schmea" rules to the "data", call this graph "inf".  In
> addition
> > I think fuseki can be configured so that any updates to "inf" are added
> as
> > triples to "data".
> >
> > My issue is that there are periodic updates to the "schema" (e.g. when
> new
> > a RDF class is created).  My understanding is that the reasoners do not
> > like it when you make updates to the graphs they are manipulating without
> > going through the reasoner itself.  So is there a way to make updates to
> > the "schema" graph via fuseki such that the "inf" graph/reasoner will be
> > happy?
>
> What happens if schema and data are in the same graph and so the updates
> do go via the inference engine?
>
> >
> > Claude
> >
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Reasoner and updating data in Fuseki

2018-11-27 Thread Claude Warren
I have a case using Fuseki.  I have 2 named graphs call them "data" and
"schema".  Data contains all the data, schema contains all the RDFS based
triples.

I can configure Fuseki so that an inference model uses an RDF Reasoner to
apply the "schmea" rules to the "data", call this graph "inf".  In addition
I think fuseki can be configured so that any updates to "inf" are added as
triples to "data".

My issue is that there are periodic updates to the "schema" (e.g. when new
a RDF class is created).  My understanding is that the reasoners do not
like it when you make updates to the graphs they are manipulating without
going through the reasoner itself.  So is there a way to make updates to
the "schema" graph via fuseki such that the "inf" graph/reasoner will be
happy?

Claude

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Push notification?

2018-08-10 Thread Claude Warren
I was actually thinking of putting a message on a queue so that anyone
listening could pick it up.  Most of our infrastructure is based on push
rather than pull.  Particularly when our data sources are in a more secure
zone.  We only want to connect from the most secure zone to the lesser
secured zone, not vice versa, if possible.  A push would make much of this
easier.

worst case I can create a poller that does the push but I suppose I was
looking for a more elegant solution.
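If it comes to that, the poller itself is simple enough, roughly (the
endpoint, the ASK pattern and the queue call are all placeholders):

import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

Runnable poll = () -> {
    try (RDFConnection conn =
            RDFConnectionFactory.connect("http://fuseki:3030/ds")) {   // placeholder
        if (conn.queryAsk("ASK { ?s <http://example.org/ready> true }")) {
            // push a message onto the queue here
        }
    }
};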

Claude

On Fri, Aug 10, 2018 at 1:28 PM, Andy Seaborne  wrote:

> General comment about Fuseki (or any server really):
>
> Having server track the clients so it can initiate sending is a real pain
> to do: clients go away, client crash without signing off, clients get
> moved, clients go slow, etc etc.
>
> Having the client poll the server may seem like excessive work but the
> reality is that many systems are built that way underneath for all those
> client reasons.
>
> The trick is to make the "ping" cheap.
>
> For RDF, keep triple with the tiemstamp of last change and get it each
> time.
>
> Or to put it another way, do Last-Modified or ETags for the data.
>
> Andy
>
>
>
> On 09/08/18 17:29, Nouwt, B. (Barry) wrote:
>
>> Hi Claude, another hack that might work (but is probably a bit devious),
>> is using a GenericRuleReasoner rule instead of a ASK query, like:
>>
>> [myAskRule: (?some pre:pattern bla:Eek) (?some eek:prop
>> "1234"^^xsd:integer) (...) -> print("The condition is TRUE!!!")
>> customBuiltinThatNotifies(?some, "s...@eek.org") ]
>>
>> Note that you would need to create a custom 'notification' Builtin that
>> contains the actual notification logic.
>>
>> Regards, Barry
>>
>> -Original Message-
>> From: ajs6f 
>> Sent: donderdag 9 augustus 2018 16:59
>> To: users@jena.apache.org
>> Subject: Re: Push notification?
>>
>> Hey, Claude--
>>
>> Is it a particular ASK query or just any time any ASK query returns
>> successfully? I don't know enough about ARQ to really answer your question,
>> but a hack might be to see if there is an exact pattern of calls to find()
>> that occur for that ASK and only that ASK on the underlying DatasetGraph
>> and use DatasetGraphWrapper to intercept and fire your notifications. I'm
>> guessing there's a much cleaner better way to do that at ARQ, which will
>> actually know about ASK queries, and I'm looking forward to someone who
>> knows more than I telling us what that is. :grin:
>>
>> ajs6f
>>
>> On Aug 9, 2018, at 10:39 AM, Claude Warren  wrote:
>>>
>>> Does anyone have a way to have Jena/Fuseki perform push notifications?
>>> I am looking for a mechanism whereby I can create an ASK query and be
>>> notified when it succeeds.
>>>
>>> Any ideas would be appreciated.
>>>
>>> --
>>> I like: Like Like - The likeliest place on the web
>>> <http://like-like.xenei.com>
>>> LinkedIn: http://www.linkedin.com/in/claudewarren
>>>
>>
>> This message may contain information that is not intended for you. If you
>> are not the addressee or if this message was sent to you by mistake, you
>> are requested to inform the sender and delete the message. TNO accepts no
>> liability for the content of this e-mail, for the manner in which you use
>> it and for damage of any kind resulting from the risks inherent to the
>> electronic transmission of messages.
>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Fuseki on Docker

2018-08-10 Thread Claude Warren
Bruno,

Thanks for the investigative effort.  It is good to know that mmap files
work in Docker, though worrying as well for the reasons you cite.

thx again, and I will post here if I find anything more.

Claude

On Fri, Aug 10, 2018 at 2:01 AM, Bruno P. Kinoshita <
brunodepau...@yahoo.com.br.invalid> wrote:

> Hi Claude,
>
> >I am running the stain/fuseki based version of fuseki on docker.
>
> I use the same when running Skosmos or when I want to quickly test
> something in Jena and don't have my Eclipse workspace.
>
>
> >I am wondering how well the memory mapped files in TDB work inside the
> >docker container.  Or even if they do at all?  Does anyone know?
>
> I assumed it would work, but never tried mmap within a container. So found
> a repo with a test for mmap in Docker.
>
>
> https://github.com/eugeneware/mmap
>
>
> So did a quick test
>
>
> ```shell
> $ cd /tmp
>
> $ git clone https://github.com/eugeneware/mmap.git
>
> $ cd mmap
> $ docker pull stain/jena-fuseki:latest
> $ docker run --name jena_fuseki_1 --rm -i -t -v $(pwd -P):/srv
> stain/jena-fuseki bash
> bash-4.3# apk add --no-cache make gcc g++
> ###
> bash-4.3# cd /srv
> bash-4.3# gcc -o mmap mmap-problem.c # alpine and glibc issues, have to
> recompile within container
> bash-4.3# ./mmap test.txt
> hello
> world
> mmap works from internal filesyste - hoorah!
>
> ```
>
> So apparently mmap should work OK. I have no idea how well it works in
> relation to having multiple containers using it, isolation, performance,
> etc. But hope it helps a bit at least. If you learn more about it, please
> share here (:
>
> Cheers
> Bruno
>
>
>
>
>
> 
> From: Claude Warren 
> To: users@jena.apache.org
> Sent: Friday, 10 August 2018 2:50 AM
> Subject: Fuseki on Docker
>
>
>
> I am running the stain/fuseki based version of fuseki on docker.
>
>
> I am wondering how well the memory mapped files in TDB work inside the
>
> docker container.  Or even if they do at all?  Does anyone know?
>
>
>
> --
>
> I like: Like Like - The likeliest place on the web
>
> <http://like-like.xenei.com>
>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: exception in 3.7.0 not in 3.6.0

2018-08-09 Thread Claude Warren
I managed to get time to run the build with 3.8.0 and the problem is fixed
with that version.

On Thu, Aug 9, 2018 at 3:34 PM, Claude Warren  wrote:

> I create an RDFConnection to a local Model (from 
> ModelFactory.createDefaultModel()
> ) (yeah I know that is a strange way to go about it but I am trying to
> isolate the code using the RDFConnection to ensure that we can talk to
> remote repositories)
>
> So what is happening is
>
> RDFConnection is called to perform an update.
> The model is updated
> A listener on the model detects a change and attempts to execute an ASK
> query through same RDFConnection and the exception is thrown.
>
> I'll try 3.8.0 as soon as I can (probably tomorrow)
>
>
>
>
>
>
> On Thu, Aug 9, 2018 at 2:20 PM, Andy Seaborne  wrote:
>
>> Sounds familiar.
>>
>> Could you try 3.8.0 please.
>> JENA-1539
>>
>>
>> On 09/08/18 12:58, Claude Warren wrote:
>>
>>> I get the following exception in some code in 3.7.0 but not in 3.6.0
>>>
>>> Exception in thread "SemaphoreListener"
>>> org.apache.jena.sparql.JenaTransactionException:
>>> Already in a transaction of a different type: outer=WRITE : inner=READ
>>>
>>
>> How are you using transactions (what's the stack)?
>>
>>
>>> The statement is correct.  But I thought that a READ was possible when a
>>> Write was active.
>>>
>>
>> Yes but this is not what it is about.
>>
>> This is one transaction inside another. True nested tranactions aren't
>> supported. Instead, the current transaction is continued if it is
>> compatible:  current=W wanted inner=R is compatible, current=R, wanted=W is
>> not.
>>
>>
>>> In the code I have a ModelListener that when it sees a specific change
>>> executes an Ask query.  both the original insert and the ask are executed
>>> via an RDFConnection (and perhaps herein lays the problem?)
>>>
>>
>> What sort of RDFConnection?
>>
>>
>>
>>> Does the RDFConnection start a new transaction?  If I executed query
>>> directly against the model that the update was in (rather than via the
>>> RDFConnection) would that put both queries within the same execution?
>>>
>>> Claude
>>>
>>>
>>>
>>>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Fuseki on Docker

2018-08-09 Thread Claude Warren
I am running the stain/fuseki based version of fuseki on docker.

I am wondering how well the memory mapped files in TDB work inside the
docker container.  Or even if they do at all?  Does anyone know?


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Push notification?

2018-08-09 Thread Claude Warren
Does anyone have a way to have Jena/Fuseki perform push notifications?  I
am looking for a mechanism whereby I can create an ASK query and be
notified when it succeeds.

Any ideas would be appreciated.

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: exception in 3.7.0 not in 3.6.0

2018-08-09 Thread Claude Warren
I create an RDFConnection to a local Model (from
ModelFactory.createDefaultModel() ) (yeah I know that is a strange way to
go about it but I am trying to isolate the code using the RDFConnection to
ensure that we can talk to remote repositories)

So what is happening is

RDFConnection is called to perform an update.
The model is updated
A listener on the model detects a change and attempts to execute an ASK
query through the same RDFConnection and the exception is thrown.

I'll try 3.8.0 as soon as I can (probably tomorrow)






On Thu, Aug 9, 2018 at 2:20 PM, Andy Seaborne  wrote:

> Sounds familiar.
>
> Could you try 3.8.0 please.
> JENA-1539
>
>
> On 09/08/18 12:58, Claude Warren wrote:
>
>> I get the following exception in some code in 3.7.0 but not in 3.6.0
>>
>> Exception in thread "SemaphoreListener"
>> org.apache.jena.sparql.JenaTransactionException:
>> Already in a transaction of a different type: outer=WRITE : inner=READ
>>
>
> How are you using transactions (what's the stack)?
>
>
>> The statement is correct.  But I thought that a READ was possible when a
>> Write was active.
>>
>
> Yes but this is not what it is about.
>
> This is one transaction inside another. True nested tranactions aren't
> supported. Instead, the current transaction is continued if it is
> compatible:  current=W wanted inner=R is compatible, current=R, wanted=W is
> not.
>
>
>> In the code I have a ModelListener that when it sees a specific change
>> executes an Ask query.  both the original insert and the ask are executed
>> via an RDFConnection (and perhaps herein lays the problem?)
>>
>
> What sort of RDFConnection?
>
>
>
>> Does the RDFConnection start a new transaction?  If I executed query
>> directly against the model that the update was in (rather than via the
>> RDFConnection) would that put both queries within the same execution?
>>
>> Claude
>>
>>
>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


exception in 3.7.0 not in 3.6.0

2018-08-09 Thread Claude Warren
I get the following exception in some code in 3.7.0 but not in 3.6.0

Exception in thread "SemaphoreListener"
org.apache.jena.sparql.JenaTransactionException:
Already in a transaction of a different type: outer=WRITE : inner=READ

The statement is correct.  But I thought that a READ was possible when a
Write was active.

In the code I have a ModelListener that when it sees a specific change
executes an Ask query.  both the original insert and the ask are executed
via an RDFConnection (and perhaps herein lies the problem?)

Does the RDFConnection start a new transaction?  If I executed query
directly against the model that the update was in (rather than via the
RDFConnection) would that put both queries within the same execution?

Claude



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: public SPARQL GET requests, while restricting PUT, POST, DELETE requests to localhost

2018-07-21 Thread Claude Warren
You can also use the permissions system to enforce full CRUD restrictions
if needed.  Though it does take a bit more work.

On Thu, Jul 19, 2018 at 2:57 PM, ajs6f  wrote:

> Keep in mind that you can wire up your endpoints just as you like:
>
> https://jena.apache.org/documentation/fuseki2/fuseki-configuration.html
>
> including multiple endpoints for any given service and so on.
>
> ajs6f
>
> > On Jul 19, 2018, at 8:08 AM, Andy Seaborne  wrote:
> >
> >
> >
> > On 18/07/18 18:53, Dan Pritts wrote:
> >> I use shiro.ini as follows.  I did this so I could have our monitoring
> system do queries, while our primary consumer software had full
> privileges. I presume, but honestly don't know for sure, that the
> /foo/query URL allows queries only.
> >
> > It does - if you use a service name /query (and the config does not wire
> that to.say, update :-) then the request goes to the query action handler
> which only groks SPARQL queries and can't go to anything else.
> >
> >Andy
> >
> >> Note that the write_user needs to have both reader_role and writer_role.
> >> # Licensed under the terms of http://www.apache.org/
> licenses/LICENSE-2.0
> >> [main]
> >> # we aren't using TLS because we are proxied by apache which handles
> that for us
> >> ssl.enabled = false
> >> # a simple sha256 is adequate for this internal-only usage
> >> # to do better, look at https://shiro.apache.org/
> command-line-hasher.html
> >> credentialsMatcher = org.apache.shiro.authc.credential.
> Sha256CredentialsMatcher
> >> #iniRealm=org.apache.shiro.realm.text.IniRealm
> >> iniRealm.credentialsMatcher = $credentialsMatcher
> >> [users]
> >> # adding users here implicitly adds "iniRealm =
> org.apache.shiro.realm.text.IniRealm"
> >> # to get the sha hash:
> >> #echo -n "passwordhere" | sha256sum  # but watch out, the password
> is in your shell history now!
> >> # the -n is important, otherwise you'll get a newline.
> >> # or get the Shiro Hasher tool
> >> write_user=shasumhere,reader_role,writer_role
> >> read_user=shasumhere,reader_role
> >> # roles that you implicitly created in the users section don't
> necessarily need to be listed here
> >> [roles]
> >> # this is a "first match wins" config.
> >> [urls]
> >> ## Control functions open to anyone
> >> /$/stats  = anon
> >> /$/server  = anon
> >> /$/ping   = anon
> >> /fcrepo/query = authcBasic, roles[reader_role]
> >> /** = authcBasic, roles[writer_role]
> >> Andy Seaborne wrote on 7/17/18 12:32 PM:
> >>> Hi there,
> >>>
> >>> > More concretely my question is, are PUT, POST, and DELETE requests
> >>> > considered "administrative" functions?
> >>>
> >>> Admin operations are HTTP requests to the UI - the URLs start "/$/"
> not the graph management PUT, POST, and DELETE requests of SPARQL Graph
> Store Protocol (and the dataset URL also respects plain REST PUT, POST).
> >>>
> >>> SPARQL Query can be over POST - large queries can be over the
> practical limits of HTTP GET and POST allows a query of any size to be set.
> >>>
> >>> If you want detailed control, you can do some configuration by using
> the built-in Apache Shiro but it can be easier to put the Fuseki server
> behind a reverse proxy (Apache httpd, nginx, ...) because these systems
> have more detailed and sophisticated permissions control systems including
> load control.
> >>>
> >>> Or run in Apache Tomcat as a WAR and use Tomcat controls.
> >>>
> >>>
> >>> Andy
> >>>
> >>> Apache Shiro + REST
> >>> https://stackoverflow.com/questions/41918916/apache-
> shiro-http-method-level-permission
> >>>
> >>> On 17/07/18 11:20, Jeffrey C. Witt wrote:
>  I was reading this security documentation (
>  https://jena.apache.org/documentation/fuseki2/fuseki-security.html)
> today
>  and had a small question.
> 
>  Basically, I would like be able to make PUT, POST, DELETE requests
> from
>  localhost, while restricting public access to the SPARQL endpoint
> (requests
>  NOT originating from localhost) to GET only requests.
> 
>  My question is whether according to the above documentation that is
> the
>  default configuration.
> 
>  The docs say:
> 
>  "In its default configuration, SPARQL endpoints are open to the
> public but
>  administrative functions are limited to localhost."
> 
>  More concretely my question is, are PUT, POST, and DELETE requests
>  considered "administrative" functions?
> 
>  Thanks
>  jw
> 
> >> --
> >> Dan Pritts
> >> ICPSR Computing & Network Services
> >> University of Michigan
> >> 
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Suppressing SPARQL queries from log

2018-06-29 Thread Claude Warren
You don't specify exactly which log entries you want to avoid but you can
do something like:

log4j.logger.org.apache.jena.sparql=WARN

would set all the sparql logs to WARN level.  Assuming you are using log4j.

On Fri, Jun 29, 2018 at 12:40 PM, Mikael Pesonen  wrote:

>
> Thanks for the reply. Our log is going to journald so I think it's
> compressed already.
>
>
> On 28.6.2018 18:05, Dan Pritts wrote:
>
>> Not a direct answer, but it may help you to switch to hourly rotation
>> instead of daily. Then you can compress the files hourly instead of daily.
>> As I'm sure you've noticed, they compress quite well.
>>
>> I use the following in log4j.properties.
>>
>> # http://www.codejava.net/coding/configure-log4j-for-creating-
>> daily-rolling-log-files
>> # also see https://github.com/epimorphics/sedgemoor-data/blob/master/
>> package/fuseki-config/log4j.properties
>> log4j.rootLogger=INFO,FusekiFileLog
>> log4j.appender.FusekiFileLog=org.apache.log4j.DailyRollingFileAppender
>> log4j.appender.FusekiFileLog.File=/var/log/fuseki/fuseki.log
>> #log4j.appender.FusekiFileLog.DatePattern='.'-MM-dd
>> # -HH does an hourly rollover
>> log4j.appender.FusekiFileLog.DatePattern='.'-MM-dd-HH
>> log4j.appender.FusekiFileLog.layout=org.apache.log4j.PatternLayout
>> log4j.appender.FusekiFileLog.layout.ConversionPattern=[%d{MMdd-HH:mm:ss}]
>> %-10c{1} %-5p %m%n
>>
>>
>>
>>
>>
>> Mikael Pesonen wrote on 6/28/18 7:37 AM:
>>
>>>
>>> Hi,
>>>
>>> we are having trouble with Fuseki log size. Easiest would be to switch
>>> to WARN level, but response times are quite usefull. So is it possible
>>> somehow to remove just the queries from INFO level?
>>>
>>> Br
>>>
>>>
>> --
>> Dan Pritts
>> ICPSR Computing & Network Services
>> University of Michigan
>> 
>>
>
> --
> Lingsoft - 30 years of Leading Language Management
>
> www.lingsoft.fi
>
> Speech Applications - Language Management - Translation - Reader's and
> Writer's Tools - Text Tools - E-books and M-books
>
> Mikael Pesonen
> System Engineer
>
> e-mail: mikael.peso...@lingsoft.fi
> Tel. +358 2 279 3300
>
> Time zone: GMT+2
>
> Helsinki Office
> Eteläranta 10
> FI-00130 Helsinki
> FINLAND
>
> Turku Office
> Kauppiaskatu 5 A
> FI-20100 Turku
> FINLAND
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: BGP, path, and optimization. Was: Jena Usage Report: Performance, bugs and feature requests

2018-06-09 Thread Claude Warren
I've been on holiday so I haven't responded on this topic but now that I am
home and can type on a "real" keyboard 

I have read back over this chain and the previous topic and am still
confused.

I would think that the optimizer would

1) convert the path statement to the equivalent multi-line statements.
2) rearrange the query to be optimal.

When rearranging for optimal I would expect that triple patterns with the
fewest vars would be evaluated first:

?type rdfs:subClassOf owl:Thing .

followed by patterns with newly discovered bindings.

?X rdfs:subClassOf? ?type

and again with the newly discovered bindings.

?obj a ?X .


So shouldn't the final execution for the query:

 select ?type (count(?obj) as ?c) where {
   ?type rdfs:subClassOf owl:Thing .
   ?obj a/rdfs:subClassOf? ?type
 }

be

 select ?type (count(?obj) as ?c) where {
   ?type rdfs:subClassOf owl:Thing .
   ?X rdfs:subClassOf? ?type
   ?obj a ?X
 }
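
As an aside, you can watch what the optimizer does from Java as well as
from qparse; a rough sketch along these lines prints the algebra before
and after optimization (I added the GROUP BY ?type that ARQ needs to
accept the aggregate):

    import org.apache.jena.query.Query;
    import org.apache.jena.query.QueryFactory;
    import org.apache.jena.sparql.algebra.Algebra;
    import org.apache.jena.sparql.algebra.Op;

    public class PrintOptimized {
        public static void main(String[] args) {
            String qs = "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> "
                      + "PREFIX owl:  <http://www.w3.org/2002/07/owl#> "
                      + "SELECT ?type (count(?obj) AS ?c) WHERE {"
                      + "  ?type rdfs:subClassOf owl:Thing ."
                      + "  ?obj a/rdfs:subClassOf? ?type"
                      + "} GROUP BY ?type";
            Query query = QueryFactory.create(qs);
            Op op = Algebra.compile(query);            // raw algebra
            System.out.println(op);
            System.out.println(Algebra.optimize(op));  // after the optimizer
        }
    }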


Claude

On Tue, Jun 5, 2018 at 10:48 AM, Andy Seaborne  wrote:

> It does - that's the "as reported" form.
>
> The better
>
>  where {
> ?type rdfs:subClassOf owl:Thing .
> ?X rdfs:subClassOf? ?type
> ?obj a ?X .
>  }
>
> is the expansion but the other way round (and which would be worse in
> other cases).
>
> You can try all this out from the command line with "qparse --print=opt"
> and online at sparql.org.
>
> Andy
>
>
> On 05/06/18 00:24, Claude Warren wrote:
>
>> Would it not make sense for the optimizer to convert to a form with the
>> intermediate variable?
>>
>> Claude
>>
>> On Mon, Jun 4, 2018, 2:37 PM Andy Seaborne  wrote:
>>
>> Clause,
>>>
>>> "qparse --print=opt"
>>>
>>> Because by expansion of "/" you get a cross product BGP and the path
>>> blocks the optimizer (it does not get moved):
>>>
>>> As reported:
>>>
>>>where {
>>>   ?type rdfs:subClassOf owl:Thing . # No vars in common
>>>   ?obj a ?X .   # No vars in common
>>>   ?X rdfs:subClassOf? ?type
>>>}
>>>
>>>  Andy
>>>
>>> On 04/06/18 18:20, Claude Warren wrote:
>>>
>>>> Andy,
>>>>
>>>> Why does the non path where clause work better?
>>>>
>>>> Claude
>>>>
>>>>
>>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Combined query results

2018-06-09 Thread Claude Warren
I can think of two ways to do this.  One is to use the bind statement to
bind the desired value.

prefix : <http://example.org/>

Select ?alice ?address where {
   ?alice :hasAddress ?address.
 bind(  "Alice" as ?alice )
}

The other is to use federated queries to do the join in one call.
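
Either way, once a single query returns both variables, the Java side is
one call; a rough sketch against the model from your example:

    import org.apache.jena.query.*;
    import org.apache.jena.rdf.model.Model;

    public class CombinedResults {
        // 'model' is assumed to hold the turtle content from the question.
        static void show(Model model) {
            String qs = "PREFIX : <http://example.org/> "
                      + "SELECT ?alice ?address WHERE { "
                      + "  ?alice :hasAddress ?address . "
                      + "  ?alice :hasName \"Alice\" "
                      + "}";
            try (QueryExecution qe = QueryExecutionFactory.create(qs, model)) {
                // Both ?alice and ?address come back in one result set.
                ResultSetFormatter.out(qe.execSelect());
            }
        }
    }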

On 08:08, Sat 9 Jun 2018 Samita Bai / PhD CS Scholar @ City Campus <
s...@iba.edu.pk> wrote:
>
> Dear Claude,
>
>
> I am working on two systems to answer the query so I need to split a
query into two parts, one part will be sent to one system and its results
would be sent to another system as a parametrized query.
>
>
>
> Regards,
>
> Samita Bai
>
> 
> From: Claude Warren 
> Sent: 08 June 2018 22:26:55
> To: users@jena.apache.org
> Subject: Re: Combined query results
>
> How about:
>
> prefix : <http://example.org/>
>
> Select ?alice ?address where {
>?alice :hasAddress ?address;
>   :hasName "Alice"
> }
>
>
>
>
> On Thu, Jun 7, 2018, 12:36 AM Samita Bai / PhD CS Scholar @ City Campus <
> s...@iba.edu.pk> wrote:
>
> > I need some help with the following code. I am using parameterized
sparql
> > string and I want to display combined results of both queries how can I
do
> > that. Any help or suggestion would be truly appreciated.
> >
> >
> >
> > final static String filename = "/home/samita/turtleContent.ttl";
> >
> >  static Model model= null;
> >
> >
> > final static String turtleContent = "" +
> > "@prefix : <http://example.org/> .\n" +
> > "\n" +
> > ":alice :hasName \"Alice\" .\n" +
> > ":alice :hasAddress \"4222 Clinton Way\" .\n" +
> > ":herman :hasName \"Herman\".\n" +
> > ":herman :hasAddress \"1313 Mockingbird Lane\" .\n" +
> > ":DrWho :hasAddress \"The TARDIS\"" +
> > "";
> >
> >
> >// Read the model from the turtle content
> >final static Model model = ModelFactory.createDefaultModel()
> >// .read( new ByteArrayInputStream(
turtleContent.getBytes()),
> > null, "TURTLE" );
> >
> >
> > final static String findAlice = "prefix : <http://example.org/>" +
> > "select ?alice where {" +
> > "?alice :hasName \"Alice\" }"  ;
> >
> > final static String findAliceAddress = "prefix : <
http://example.org/>"
> > +
> > "select ?address where {" +
> > " ?alice :hasAddress ?address }";
> >
> >  public static void useParameterizedSPARQLString() {
> > System.out.println( "== useParameterizedSPARQLString ==" );
> > // execute the query that finds a (single) binding for ?alice.
> > Then create
> > // a query solution map containing those results.
> > final ResultSet aliceResults = QueryExecutionFactory.create(
> > findAlice, model ).execSelect();
> > final QuerySolutionMap map = new QuerySolutionMap();
> > map.addAll( aliceResults.next() );
> > // Create a ParameterizedSparqlString from the findAliceAddress
> > query string (if this
> > // approach were taken, findAliceAddress could actually *be* a
> > Param.SparqlString, of
> > // course).
> > final ParameterizedSparqlString pss = new
> > ParameterizedSparqlString( findAliceAddress );
> > System.out.println( pss.toString() );
> > pss.setParams( map );
> > System.out.println( pss.toString() );
> > // execute the query and show the results
> > ResultSetFormatter.out( QueryExecutionFactory.create(
> > pss.toString(), model ).execSelect() );
> > }
> >
> > The results I get is:
> >
> > --
> > | address|
> > ==
> > | "4222 Clinton Way" |
> > --
> >
> > Is there any way to display the variable ?alice also
> >
> > Like
> >
> > alice   address
> >
> > http://example.org/alice"4222 Clinton Way"
> >
> >
> >
> >
> >
> > P : Please consider the environment before printing

Re: Combined query results

2018-06-08 Thread Claude Warren
How about:

prefix : 

Select ?alice ?address where {
   ?alice :hasAddress ?address;
  :hasName "Alice"
}




On Thu, Jun 7, 2018, 12:36 AM Samita Bai / PhD CS Scholar @ City Campus <
s...@iba.edu.pk> wrote:

> I need some help with the following code. I am using parameterized sparql
> string and I want to display combined results of both queries how can I do
> that. Any help or suggestion would be truly appreciated.
>
>
>
> final static String filename = "/home/samita/turtleContent.ttl";
>
>  static Model model= null;
>
>
> final static String turtleContent = "" +
> "@prefix :  .\n" +
> "\n" +
> ":alice :hasName \"Alice\" .\n" +
> ":alice :hasAddress \"4222 Clinton Way\" .\n" +
> ":herman :hasName \"Herman\".\n" +
> ":herman :hasAddress \"1313 Mockingbird Lane\" .\n" +
> ":DrWho :hasAddress \"The TARDIS\"" +
> "";
>
>
>// Read the model from the turtle content
>final static Model model = ModelFactory.createDefaultModel()
>// .read( new ByteArrayInputStream( turtleContent.getBytes()),
> null, "TURTLE" );
>
>
> final static String findAlice = "prefix : " +
> "select ?alice where {" +
> "?alice :hasName \"Alice\" }"  ;
>
> final static String findAliceAddress = "prefix : "
> +
> "select ?address where {" +
> " ?alice :hasAddress ?address }";
>
>  public static void useParameterizedSPARQLString() {
> System.out.println( "== useParameterizedSPARQLString ==" );
> // execute the query that finds a (single) binding for ?alice.
> Then create
> // a query solution map containing those results.
> final ResultSet aliceResults = QueryExecutionFactory.create(
> findAlice, model ).execSelect();
> final QuerySolutionMap map = new QuerySolutionMap();
> map.addAll( aliceResults.next() );
> // Create a ParameterizedSparqlString from the findAliceAddress
> query string (if this
> // approach were taken, findAliceAddress could actually *be* a
> Param.SparqlString, of
> // course).
> final ParameterizedSparqlString pss = new
> ParameterizedSparqlString( findAliceAddress );
> System.out.println( pss.toString() );
> pss.setParams( map );
> System.out.println( pss.toString() );
> // execute the query and show the results
> ResultSetFormatter.out( QueryExecutionFactory.create(
> pss.toString(), model ).execSelect() );
> }
>
> The results I get is:
>
> --
> | address|
> ==
> | "4222 Clinton Way" |
> --
>
> Is there any way to display the variable ?alice also
>
> Like
>
> alice   address
>
> http://example.org/alice"4222 Clinton Way"
>
>
>
>
>
> P : Please consider the environment before printing this e-mail
>
> 
>
> CONFIDENTIALITY / DISCLAIMER NOTICE: This e-mail and any attachments may
> contain confidential and privileged information. If you are not the
> intended recipient, please notify the sender immediately by return e-mail,
> delete this e-mail and destroy any copies. Any dissemination or use of this
> information by a person other than the intended recipient is unauthorized
> and may be illegal.
>
> 
>


BGP, path, and optimization. Was: Jena Usage Report: Performance, bugs and feature requests

2018-06-04 Thread Claude Warren
Would it not make sense for the optimizer to convert to a form with the
intermediate variable?

Claude

On Mon, Jun 4, 2018, 2:37 PM Andy Seaborne  wrote:

> Clause,
>
> "qparse --print=opt"
>
> Because by expansion of "/" you get a cross product BGP and the path
> blocks the optimizer (it does not get moved):
>
> As reported:
>
>   where {
>  ?type rdfs:subClassOf owl:Thing . # No vars in common
>  ?obj a ?X .   # No vars in common
>  ?X rdfs:subClassOf? ?type
>   }
>
> Andy
>
> On 04/06/18 18:20, Claude Warren wrote:
> > Andy,
> >
> > Why does the non path where clause work better?
> >
> > Claude
> >
>


Re: Jena Usage Report: Performance, bugs and feature requests

2018-06-04 Thread Claude Warren
Andy,

Why does the non path where clause work better?

Claude


Re: Jena Usage Report: Performance, bugs and feature requests

2018-05-30 Thread Claude Warren
Just a quick note.  There is a cassandra implementation but no work has
been done on performance tuning.

On a second note.  I did some work using bloom filters to do partitioning
that allows adding partitions on demand.  Should work for triple store
partitioning as well.

Claude

On Wed, May 30, 2018, 8:43 AM Siddhesh Rane  wrote:

> For my undergraduate project I used Fuseki 3.6.0 server backed by a TDB
> dataset.
> 3 unique SPARQL queries were made against the server by 6 nodes in
> a Spark cluster returning a total of 150 million triples.
> As I used DBpedia's dataset, nearly all the entities from Wikipedia
> were covered, so my experiment is somewhat like an exhaustive test.
>
> I write as someone quite new to Jena and SPARQL itself so I may have
> faced problems for doing things the wrong way, or not knowing better
> solutions.
> Although Fuseki was a critical component in my pipeline, I could not
> spend much time on learning it properly, so kindly forgive any
> ignorance on my part.
> I hope my experience will help the developers to know the ways in
> which at least newcomers are using this software.
>
>
> DATA INGESTION
>
> This was the most tedious part about using Jena.
> The ability to create a TDB database and upload data to it, all from
> the browser, is a really nice feature. The difficult part is that the
> memory required to do so is proportional to the size of the data being
> uploaded. The largest file that I was trying to upload contained 158M
> triples (24GB uncompressed, 1.5GB bz2 compressed) and it was
> frequently running out of memory. I had to set Fuseki to -Xmx32g and
> only then did it work. Command line tools faced the same problem.
>
> Another thing is that both the web interface and command line tools
> optionally accept gzip files, but not bzip2 wheareas bzip2 is used by
> both Wikipedia and DBpedia for their data dumps.
> I tried to work around the issue by `bzcat file.ttl.bz2 | gzip >
> named-pipe` and then using the named-pipe for data ingestion but that
> did not work.
>
> I finally ended up using `tdbloader2` which works with constant memory
> and, as I read somewhere on the mailing list, produces the smallest
> size database.
> There might be some SPARQL way for inserting data in batches and I
> probably could have scripted that but I had a project to complete and
> so went with what appeared to be the most straightforward way of doing
> things.
>
> Performance of tdbloader2:
> On my 2017 Spectre x360 laptop with 16GB RAM, dual core i7-7500U cpu
> and 512GB SSD
> Phase I: 199,597,131 tuples; 2,941.31 seconds ; 67,860.02 tuples/sec
> Totat time 4609s
>
> I wanted to control what indexes are generated because I knew the
> access pattern of my SPARQL queries and also wanted smaller DB size.
> I think there are toggles to decide what indexes are generated but I
> did not try to search much.
> Controlling the indexes from tdbloader2 itself would be a great
> option. If this feature already exists, please let me know.
>
> Another point of confusion is the version of the backing TDB database.
> The `bin` folder contains tdb commands for both v1 and v2 but I'm not
> sure what version does Fuseki use when I create a persistent store.
> The database config file `config.ttl` does not mention any version.
> I would appreciate if someone could clear up this confusion for me.
>
> TL;DR please support bzip2, reading from pipes and constant memory
> loading operations.
>
>
> DATABASE PERFORMANCE, REPLICATION and SHARDING
>
> My project used Spark to distribute the load among a cluster of
> machines. The input data was all the articles in Wikipedia. Each
> partition of the data would contain about 250 articles. The first
> SPARQL query was to DESCRIBE these articles. A subsequent CONSTRUCT
> query would fetch the labels for all the object resources in the model
> returned by the first query. There were 44 cores in the cluster so at
> any time 44 partitions would generate 44*2=88 SPARQL queries. The
> DESCRIBE query would run in milliseconds whereas the CONSTRUCT query
> would take 1-2 seconds, because of the random access nature.
> Benchmarks were not comprehensive, just observation of log output.
>
> I got this performance when the entire database was resident in RAM,
> as reported by `vmtouch` (https://hoytech.com/vmtouch/)
> Without complete memory mapping, the performance would degrade to
> 500-1000 seconds per CONSTRUCT query. In my case the db was 16-19GB in
> size so it could be `vmtouched` in RAM on a 32GB RAM 8 core Xeon
> machine.
>
> To increase performance further I replicated the db on an identical
> machine and load balanced queries between the two machines. The
> execution time of my entire Spark app went down from 2 hours to 1
> hour. A recent thread on this list talks about high availability and
> replication. You can just assign different threads to query different
> replicas of the db with fallback on the other and that would be
> sufficient in most cases

Re: linked data and URLs

2018-05-22 Thread Claude Warren
I am  thinking that perhaps the easiest solution will be to create a Node
rewriter and filter all the Fuseki results through it to rewrite the
nodes.  I will also have to rewrite all the queries going in but it might
be doable.
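
Something like this rough sketch is the shape of it for the result side
(the urn: and http: bases are placeholders for whatever the backend really
uses, and a real version would need the same treatment for incoming
queries):

    import org.apache.jena.rdf.model.*;

    public class UriRewriter {

        // Copy 'src' into a new model, replacing any subject/object URI
        // starting with fromBase by toBase.  Predicates (foaf:*) are left alone.
        static Model rewrite(Model src, String fromBase, String toBase) {
            Model out = ModelFactory.createDefaultModel();
            StmtIterator it = src.listStatements();
            while (it.hasNext()) {
                Statement s = it.next();
                RDFNode obj = s.getObject();
                if (obj.isURIResource()) {
                    obj = map(out, obj.asResource(), fromBase, toBase);
                }
                out.add(map(out, s.getSubject(), fromBase, toBase),
                        s.getPredicate(), obj);
            }
            return out;
        }

        static Resource map(Model m, Resource r, String fromBase, String toBase) {
            if (r.isURIResource() && r.getURI().startsWith(fromBase)) {
                return m.createResource(toBase + r.getURI().substring(fromBase.length()));
            }
            return r;  // blank nodes and other URIs pass through unchanged
        }
    }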

On Tue, May 22, 2018 at 3:15 PM, Claude Warren  wrote:

> Can not use blank nodes at the start as they cause significant problems
> later when trying to do deletes and such via update protocols in Fuseki.
>
> Small models are built locally and sent as update requests to the Fuseki
> server so methods that require access to do renames will not be efficient
> as there is no efficient way to run them on the Fuseki server.
>
> On Tue, May 22, 2018 at 2:55 PM, Martynas Jusevičius <
> marty...@atomgraph.com> wrote:
>
>> Why generate URIs at all in the beginning, can't you use blank nodes?
>>
>> Rewriting URIs is generally a bad idea in a Linked Data setting. Make one
>> datasource canonical and let the other one deal with that. Or maybe you
>> can
>> configure your proxy in a way that hides the port number and you don't
>> need
>> the second version (just a guess).
>>
>> Also, you generate document URIs, and persons are not documents. Hash URIs
>> would probably be best for persons.
>>
>> If you choose to ignore the above, this method might help you:
>> https://jena.apache.org/documentation/javadoc/jena/org/apache/jena/util/ResourceUtils.html#renameResource-org.apache.jena.rdf.model.Resource-java.lang.String-
>>
>> On Tue, May 22, 2018 at 1:00 PM, Claude Warren  wrote:
>>
>> > I have what I think may  be a common problem and am looking for
>> suggested
>> > patterns and anti-patterns for a solution.
>> >
>> > For the sake of this example let's assume that the system described
>> creates
>> > FOAF records.
>> >
>> > == PROBLEM
>> >
>> > Backend:
>> >
>> > The backend system generates FOAF records but does not have any
>> information
>> > about where they will be stored/deployed.  So it generates records like
>> >
>> >  a foaf:Person ;
>> > foaf:name "Jimmy Wales" ;
>> > foaf:mbox <mailto:jwa...@bomis.com> ;
>> > foaf:homepage <http://www.jimmywales.com> ;
>> > foaf:nick "Jimbo" ;
>> > foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
>> > foaf:interest <http://www.wikimedia.org> ;
>> > foaf:knows  .
>> >
>> >  a foaf:Person ;
>> > foaf:name "Angela Beesley" .
>> >
>> >
>> > This data is stored in a Fuseki based server.
>> >
>> > Frontend 1:
>> >
>> > The front end should replace the  based URIs with http
>> > based URIs that point to the frontend.  So assuming the frontend is at
>> > http://frontend:8080 and has a method to return RDF in turtle format
>> the
>> > RDF should look like
>> >
>> >  <http://frontend:8080/foaf1> a foaf:Person ;
>> >  foaf:name "Jimmy Wales" ;
>> >  foaf:mbox <mailto:jwa...@bomis.com> ;
>> >  foaf:homepage <http://www.jimmywales.com> ;
>> >  foaf:nick "Jimbo" ;
>> >  foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
>> >  foaf:interest <http://www.wikimedia.org> ;
>> >  foaf:knows <http://frontend:8080/foaf2> .
>> >
>> >  <http://frontend:8080/foaf2> a foaf:Person ;
>> >  foaf:name "Angela Beesley" .
>> >
>> > Frontend 2:
>> >
>> > There is a second frontend with a different URL. http://frontend2:8080,
>> > frontend 1 and frontend 2.  Frontend 2 does not have access to frontend
>> 1
>> > (assume that there is a firewall that prohibits the access).   Frontend2
>> > should produce RDF like:
>> >
>> >  <http://frontend2:8080/foaf1> a foaf:Person ;
>> >  foaf:name "Jimmy Wales" ;
>> >  foaf:mbox <mailto:jwa...@bomis.com> ;
>> >  foaf:homepage <http://www.jimmywales.com> ;
>> >  foaf:nick "Jimbo" ;
>> >  foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
>> >  foaf:interest <http://www.wikimedia.org> ;
>> >  foaf:knows <http://frontend2:8080/foaf2> .
>> >
>> >  <http://frontend2:8080/foaf2> a foaf:Person ;
>> >  foaf:name "Angela Beesley" .
>> >
>> > == Question
>> >
>> > How can I setup a system that will automatically convert one URI to
>> another
>> > without storing multiple copies of the data (e.g. not multiple
>> datasets).
>> > I have thought about using owl:sameAs and am wondering if there is a
>> > reasoner that will process it.
>> >
>> > Anyway, has anyone else come across this problem (I figure so) and does
>> > anyone have a possible solution?
>> >
>> >
>> >
>> >
>> > Thx,
>> > Claude
>> > --
>> > I like: Like Like - The likeliest place on the web
>> > <http://like-like.xenei.com>
>> > LinkedIn: http://www.linkedin.com/in/claudewarren
>> >
>>
>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: linked data and URLs

2018-05-22 Thread Claude Warren
Can not use blank nodes at the start as they cause significant problems
later when trying to do deletes and such via update protocols in Fuseki.

Small models are built locally and sent as update requests to the Fuseki
server so methods that require access to do renames will not be efficient
as there is no efficient way to run them on the Fuseki server.

On Tue, May 22, 2018 at 2:55 PM, Martynas Jusevičius  wrote:

> Why generate URIs at all in the beginning, can't you use blank nodes?
>
> Rewriting URIs is generally a bad idea in a Linked Data setting. Make one
> datasource canonical and let the other one deal with that. Or maybe you can
> configure your proxy in a way that hides the port number and you don't need
> the second version (just a guess).
>
> Also, you generate document URIs, and persons are not documents. Hash URIs
> would probably be best for persons.
>
> If you choose to ignore the above, this method might help you:
> https://jena.apache.org/documentation/javadoc/jena/org/apache/jena/util/ResourceUtils.html#renameResource-org.apache.jena.rdf.model.Resource-java.lang.String-
>
> On Tue, May 22, 2018 at 1:00 PM, Claude Warren  wrote:
>
> > I have what I think may  be a common problem and am looking for suggested
> > patterns and anti-patterns for a solution.
> >
> > For the sake of this example let's assume that the system described
> creates
> > FOAF records.
> >
> > == PROBLEM
> >
> > Backend:
> >
> > The backend system generates FOAF records but does not have any
> information
> > about where they will be stored/deployed.  So it generates records like
> >
> >  a foaf:Person ;
> > foaf:name "Jimmy Wales" ;
> > foaf:mbox <mailto:jwa...@bomis.com> ;
> > foaf:homepage <http://www.jimmywales.com> ;
> > foaf:nick "Jimbo" ;
> > foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
> > foaf:interest <http://www.wikimedia.org> ;
> > foaf:knows  .
> >
> >  a foaf:Person ;
> > foaf:name "Angela Beesley" .
> >
> >
> > This data is stored in a Fuseki based server.
> >
> > Frontend 1:
> >
> > The front end should replace the  based URIs with http
> > based URIs that point to the frontend.  So assuming the frontend is at
> > http://frontend:8080 and has a method to return RDF in turtle format the
> > RDF should look like
> >
> >  <http://frontend:8080/foaf1> a foaf:Person ;
> >  foaf:name "Jimmy Wales" ;
> >  foaf:mbox <mailto:jwa...@bomis.com> ;
> >  foaf:homepage <http://www.jimmywales.com> ;
> >  foaf:nick "Jimbo" ;
> >  foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
> >  foaf:interest <http://www.wikimedia.org> ;
> >  foaf:knows <http://frontend:8080/foaf2> .
> >
> >  <http://frontend:8080/foaf2> a foaf:Person ;
> >  foaf:name "Angela Beesley" .
> >
> > Frontend 2:
> >
> > There is a second frontend with a different URL. http://frontend2:8080,
> > frontend 1 and frontend 2.  Frontend 2 does not have access to frontend 1
> > (assume that there is a firewall that prohibits the access).   Frontend2
> > should produce RDF like:
> >
> >  <http://frontend2:8080/foaf1> a foaf:Person ;
> >  foaf:name "Jimmy Wales" ;
> >  foaf:mbox <mailto:jwa...@bomis.com> ;
> >  foaf:homepage <http://www.jimmywales.com> ;
> >  foaf:nick "Jimbo" ;
> >  foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
> >  foaf:interest <http://www.wikimedia.org> ;
> >  foaf:knows <http://frontend2:8080/foaf2> .
> >
> >  <http://frontend2:8080/foaf2> a foaf:Person ;
> >  foaf:name "Angela Beesley" .
> >
> > == Question
> >
> > How can I setup a system that will automatically convert one URI to
> another
> > without storing multiple copies of the data (e.g. not multiple datasets).
> > I have thought about using owl:sameAs and am wondering if there is a
> > reasoner that will process it.
> >
> > Anyway, has anyone else come across this problem (I figure so) and does
> > anyone have a possible solution?
> >
> >
> >
> >
> > Thx,
> > Claude
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
> >
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


linked data and URLs

2018-05-22 Thread Claude Warren
I have what I think may  be a common problem and am looking for suggested
patterns and anti-patterns for a solution.

For the sake of this example let's assume that the system described creates
FOAF records.

== PROBLEM

Backend:

The backend system generates FOAF records but does not have any information
about where they will be stored/deployed.  So it generates records like

 a foaf:Person ;
foaf:name "Jimmy Wales" ;
foaf:mbox <mailto:jwa...@bomis.com> ;
foaf:homepage <http://www.jimmywales.com> ;
foaf:nick "Jimbo" ;
foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
foaf:interest <http://www.wikimedia.org> ;
foaf:knows  .

 a foaf:Person ;
foaf:name "Angela Beesley" .


This data is stored in a Fuseki based server.

Frontend 1:

The front end should replace the  based URIs with http
based URIs that point to the frontend.  So assuming the frontend is at
http://frontend:8080 and has a method to return RDF in turtle format the
RDF should look like

 <http://frontend:8080/foaf1> a foaf:Person ;
 foaf:name "Jimmy Wales" ;
 foaf:mbox <mailto:jwa...@bomis.com> ;
 foaf:homepage <http://www.jimmywales.com> ;
 foaf:nick "Jimbo" ;
 foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
 foaf:interest <http://www.wikimedia.org> ;
 foaf:knows <http://frontend:8080/foaf2> .

 <http://frontend:8080/foaf2> a foaf:Person ;
 foaf:name "Angela Beesley" .

Frontend 2:

There is a second frontend with a different URL, http://frontend2:8080,
from frontend 1.  Frontend 2 does not have access to frontend 1
(assume that there is a firewall that prohibits the access).   Frontend2
should produce RDF like:

 <http://frontend2:8080/foaf1> a foaf:Person ;
 foaf:name "Jimmy Wales" ;
 foaf:mbox <mailto:jwa...@bomis.com> ;
 foaf:homepage <http://www.jimmywales.com> ;
 foaf:nick "Jimbo" ;
 foaf:depiction <http://www.jimmywales.com/aus_img_small.jpg> ;
 foaf:interest <http://www.wikimedia.org> ;
 foaf:knows <http://frontend2:8080/foaf2> .

 <http://frontend2:8080/foaf2> a foaf:Person ;
 foaf:name "Angela Beesley" .

== Question

How can I setup a system that will automatically convert one URI to another
without storing multiple copies of the data (e.g. not multiple datasets).
I have thought about using owl:sameAs and am wondering if there is a
reasoner that will process it.

Anyway, has anyone else come across this problem (I figure so) and does
anyone have a possible solution?




Thx,
Claude
-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Problem with understanding Jena Permissions

2018-05-15 Thread Claude Warren
Jena is driven by user contributions.  (Andy has a great phrase for this
but I don't recall what it is at the moment).  But if users want the
functionality and contribute it then Jena will have it.

Moving the permissions implementation up the stack (to datasets for
example) has been a goal of mine for some time but I have never had the
direct need nor the time to do it.  If you have the time and the
inclination I would help you with the development as much as I can.

Claude

On Mon, May 14, 2018 at 8:45 PM, katja.danilova94 <
katja.danilov...@gmail.com> wrote:

> Thanks for guidance, I will do it this way then.
> And the future plan is to create a type of secured dataset in Fuseki so
> that all incoming and outgoing models are secured and checked automatically?
>
>
>
> From: Claude Warren  Date: 14.05.18  21:43  (GMT+02:00)
> To: users@jena.apache.org Subject: Re: Problem with understanding Jena
> Permissions
> Permissions were originally designed to work outside of Fuseki and still
> do.  I often use them to create read only models.
>
> The Fuseki interface was originally intended to secure existing models.
> However, as I mentioned before it should be possible to have the system
> generate secured models on creation in Fuseki, it just hasn't been done
> yet.
>
> Claude
>
>
>
> On Mon, May 14, 2018 at 7:13 PM, Ekaterina Danilova <
> katja.danilov...@gmail.com> wrote:
>
> > Thank you for your reply,
> >
> > One more way might be implementing the SecurityEvaluator at the
> application
> > side and creating secured models there. It should work quite easily, but
> I
> > am not sure it is best solution. Is the Permissions package intended to
> be
> > used only as addition to Fuseki?
> >
> > And if Permissions are originally supposed to be used only with Fuseki,
> > then atm the main way how it is used is like in the example below -
> loading
> > data through Assembler straight into secured model?
> >
> > my:baseModel rdf:type ja:MemoryModel;
> > ja:content [ja:externalContent ]
> > .
> >
> > my:securedModel rdf:type sec:Model ;
> >     perm:baseModel my:baseModel ;
> ja:modelName "https://example.org/securedModel" ;
> > perm:evaluatorImpl my:secEvaluator .
> >
> >
> >
> >
> >
> >
> >
> > 2018-05-11 17:06 GMT+03:00 Claude Warren :
> >
> > > The permissions in your example are attached to the model called
> > > my:secModel.
> > >
> > > Basically you have the graph and it you access it with "using" or
> "from"
> > > statements the evaluator will be called.
> > >
> > > It is possible to make the model the default model for fuseki queries
> but
> > > that is not really what you want.
> > >
> > > What you want is the ability to create new models and have them be
> > > recognized as secured models.  This has not been implemented.  It might
> > be
> > > doable as a secured dataset (not implemented) or it may require other
> > work
> > > to ensure that the models are correctly created as secured models. (not
> > > sure how this would work off the top of my head).
> > >
> > > Claude
> > >
> > > On Fri, May 11, 2018 at 2:59 PM, Ekaterina Danilova <
> > > katja.danilov...@gmail.com> wrote:
> > >
> > > > Hello!
> > > > Yes, I tried to modify the config.ttl accoridng to the guide and it
> > looks
> > > > this way:
> > > >
> > > > PREFIX :<#>
> > > > PREFIX fuseki:  <http://jena.apache.org/fuseki#>
> > > > PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
> > > > PREFIX perm:<http://apache.org/jena/permissions/Assembler#>
> > > > PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
> > > > PREFIX tdb2:<http://jena.apache.org/2016/tdb#>
> > > > PREFIX my:  <http://example.org/#>
> > > > PREFIX sec: <http://apache.org/jena/permission/Assembler#Model>
> > > >
> > > > [] perm:loadClass"org.apache.jena.permissions.SecuredAssembler"
> .
> > > >  sec:Model rdfs:subClassOf perm:NamedModel .
> > > >
> > > >  sec:evaluator rdfs:domain sec:Model ;
> > > >rdfs:range sec:Evaluator .
> > > >
> > > >  my:secModel a sec:Model ;
> > > > sec:baseModel my:baseModel ;
> > > > perm:modelName "http://example.com/securedModel"

Re: Problem with understanding Jena Permissions

2018-05-14 Thread Claude Warren
Permissions were originally designed to work outside of Fuseki and still
do.  I often use them to create read only models.

The Fuseki interface was originally intended to secure existing models.
However, as I mentioned before it should be possible to have the system
generate secured models on creation in Fuseki, it just hasn't been done yet.

Claude



On Mon, May 14, 2018 at 7:13 PM, Ekaterina Danilova <
katja.danilov...@gmail.com> wrote:

> Thank you for your reply,
>
> One more way might be implementing the SecurityEvaluator at the application
> side and creating secured models there. It should work quite easily, but I
> am not sure it is best solution. Is the Permissions package intended to be
> used only as addition to Fuseki?
>
> And if Permissions are originally supposed to be used only with Fuseki,
> then atm the main way how it is used is like in the example below - loading
> data through Assembler straight into secured model?
>
> my:baseModel rdf:type ja:MemoryModel;
> ja:content [ja:externalContent ]
> .
>
> my:securedModel rdf:type sec:Model ;
> perm:baseModel my:baseModel ;
> ja:modelName "https://example.org/securedModel" ;
> perm:evaluatorImpl my:secEvaluator .
>
>
>
>
>
>
>
> 2018-05-11 17:06 GMT+03:00 Claude Warren :
>
> > The permissions in your example are attached to the model called
> > my:secModel.
> >
> > Basically you have the graph and it you access it with "using" or "from"
> > statements the evaluator will be called.
> >
> > It is possible to make the model the default model for fuseki queries but
> > that is not really what you want.
> >
> > What you want is the ability to create new models and have them be
> > recognized as secured models.  This has not been implemented.  It might
> be
> > doable as a secured dataset (not implemented) or it may require other
> work
> > to ensure that the models are correctly created as secured models. (not
> > sure how this would work off the top of my head).
> >
> > Claude
> >
> > On Fri, May 11, 2018 at 2:59 PM, Ekaterina Danilova <
> > katja.danilov...@gmail.com> wrote:
> >
> > > Hello!
> > > Yes, I tried to modify the config.ttl accoridng to the guide and it
> looks
> > > this way:
> > >
> > > PREFIX :<#>
> > > PREFIX fuseki:  <http://jena.apache.org/fuseki#>
> > > PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
> > > PREFIX perm:<http://apache.org/jena/permissions/Assembler#>
> > > PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
> > > PREFIX tdb2:<http://jena.apache.org/2016/tdb#>
> > > PREFIX my:  <http://example.org/#>
> > > PREFIX sec: <http://apache.org/jena/permission/Assembler#Model>
> > >
> > > [] perm:loadClass "org.apache.jena.permissions.SecuredAssembler" .
> > >  sec:Model rdfs:subClassOf perm:NamedModel .
> > >
> > >  sec:evaluator rdfs:domain sec:Model ;
> > >rdfs:range sec:Evaluator .
> > >
> > >  my:secModel a sec:Model ;
> > > sec:baseModel my:baseModel ;
> > > perm:modelName "http://example.com/securedModel" ;
> > > sec:evaluatorImpl my:myEvaluator;
> > > .
> > >
> > > my:myEvaluator a sec:Evaluator ;
> > > perm:args [
> > > rdf:_1 my:baseModel ;
> > > ] ;
> > > perm:evaluatorClass
> > > "org.apache.jena.permissions.example.ShiroExampleEvaluator" .
> > >
> > > [] rdf:type fuseki:Server ;
> > >fuseki:services (
> > >  <#service_tdb2>
> > > //the list of services omitted
> > >
> > > And the models are uploaded from the application with :
> > >
> > > DatasetAccessor accessor = DatasetAccessorFactory.createHTTP();
> > > accessor.putModel(name, model);
> > >
> > > So, with these configurations Fuseki doesn't do anything with the
> models.
> > > Am I missing something?
> > >
> > > Thank you for help.
> > >
> > >
> > > 2018-05-11 16:11 GMT+03:00 Claude Warren :
> > >
> > > > You don't say if you have modified the default Fuseki configuration
> but
> > > > what you will need to do is to modify the configuration file so that
> > the
> > > > models that are created using the SecuredAssembler.
> > > > (
> > > > http://jena.apache.org/do

Re: Problem with understanding Jena Permissions

2018-05-11 Thread Claude Warren
The permissions in your example are attached to the model called
my:secModel.

Basically you have the graph and it you access it with "using" or "from"
statements the evaluator will be called.

It is possible to make the model the default model for fuseki queries but
that is not really what you want.

What you want is the ability to create new models and have them be
recognized as secured models.  This has not been implemented.  It might be
doable as a secured dataset (not implemented) or it may require other work
to ensure that the models are correctly created as secured models. (not
sure how this would work off the top of my head).

Claude

On Fri, May 11, 2018 at 2:59 PM, Ekaterina Danilova <
katja.danilov...@gmail.com> wrote:

> Hello!
> Yes, I tried to modify the config.ttl accoridng to the guide and it looks
> this way:
>
> PREFIX :<#>
> PREFIX fuseki:  <http://jena.apache.org/fuseki#>
> PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
> PREFIX perm:<http://apache.org/jena/permissions/Assembler#>
> PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
> PREFIX tdb2:<http://jena.apache.org/2016/tdb#>
> PREFIX my:  <http://example.org/#>
> PREFIX sec: <http://apache.org/jena/permission/Assembler#Model>
>
> [] perm:loadClass "org.apache.jena.permissions.SecuredAssembler" .
>  sec:Model rdfs:subClassOf perm:NamedModel .
>
>  sec:evaluator rdfs:domain sec:Model ;
>rdfs:range sec:Evaluator .
>
>  my:secModel a sec:Model ;
> sec:baseModel my:baseModel ;
> perm:modelName "http://example.com/securedModel" ;
> sec:evaluatorImpl my:myEvaluator;
> .
>
> my:myEvaluator a sec:Evaluator ;
> perm:args [
> rdf:_1 my:baseModel ;
> ] ;
> perm:evaluatorClass
> "org.apache.jena.permissions.example.ShiroExampleEvaluator" .
>
> [] rdf:type fuseki:Server ;
>fuseki:services (
>  <#service_tdb2>
> //the list of services omitted
>
> And the models are uploaded from the application with :
>
> DatasetAccessor accessor = DatasetAccessorFactory.createHTTP();
> accessor.putModel(name, model);
>
> So, with these configurations Fuseki doesn't do anything with the models.
> Am I missing something?
>
> Thank you for help.
>
>
> 2018-05-11 16:11 GMT+03:00 Claude Warren :
>
> > You don't say if you have modified the default Fuseki configuration but
> > what you will need to do is to modify the configuration file so that the
> > models that are created using the SecuredAssembler.
> > (
> > http://jena.apache.org/documentation/javadoc/permissions/org/apache/jena/permissions/SecuredAssembler.html).
> > This process will hook your security evaluator to the models.
> >
> > Then requests will be filtered automatically.  Your security evaluator
> will
> > be called with the name of the model as specified in the
> SecuredAssembler.
> >
> > I don't think anyone has implemented a mechanism to allow uploading of
> > graphs/models into secure graphs.  It probably could be done.  If you are
> > interested in attempting such let me know and we can outline how to do
> it.
> >
> > Claude
> >
> > On Fri, May 11, 2018 at 1:41 PM, Ekaterina Danilova <
> > katja.danilov...@gmail.com> wrote:
> >
> > > Hello!
> > > I have a problem with understanding Jena permissions.
> > >
> > > I have an application which creates named graphs, uploads and reads
> those
> > > through Fuseki. I would like to add some security and create different
> > > access rules for different users etc. As the documentation (
> > > https://jena.apache.org/documentation/permissions/) says, it can be
> done
> > > with my own Security Evaluator implementation.
> > >
> > > What I don't understand is where and how exactly permissions should be
> > > added. Should they be only at Fuseki side? If so, then how can Fuseki
> > > understand to process each model as secured model? If I wish to create
> > > secured model at the side of application, then I have to use this
> method:
> > > Factory.getInstance( SecurityEvaluator, String, Model );
> > > which requires the SecurityEvaluator at the application side too. But
> if
> > I
> > > add it there, then there is no sense in having the security evaluator
> at
> > > Fuseki side.
> > >
> > > My problem is that even though I added the permissions jar with my own
> > > SecurityEvaluator (a bit modified ShiroExampleEvaluator) to Fuseki
>

Re: Problem with understanding Jena Permissions

2018-05-11 Thread Claude Warren
You don't say if you have modified the default Fuseki configuration but
what you will need to do is to modify the configuration file so that the
models that are created using the SecuredAssembler.
(
http://jena.apache.org/documentation/javadoc/permissions/org/apache/jena/permissions/SecuredAssembler.html).
This process will hook your security evaluator to the models.

Then requests will be filtered automatically.  Your security evaluator will
be called with the name of the model as specified in the SecuredAssembler.

I don't think anyone has implemented a mechanism to allow uploading of
graphs/models into secure graphs.  It probably could be done.  If you are
interested in attempting such let me know and we can outline how to do it.

Claude

On Fri, May 11, 2018 at 1:41 PM, Ekaterina Danilova <
katja.danilov...@gmail.com> wrote:

> Hello!
> I have a problem with understanding Jena permissions.
>
> I have an application which creates named graphs, uploads and reads those
> through Fuseki. I would like to add some security and create different
> access rules for different users etc. As the documentation (
> https://jena.apache.org/documentation/permissions/) says, it can be done
> with my own Security Evaluator implementation.
>
> What I don't understand is where and how exactly permissions should be
> added. Should they be only at Fuseki side? If so, then how can Fuseki
> understand to process each model as secured model? If I wish to create
> secured model at the side of application, then I have to use this method:
> Factory.getInstance( SecurityEvaluator, String, Model );
> which requires the SecurityEvaluator at the application side too. But if I
> add it there, then there is no sense in having the security evaluator at
> Fuseki side.
>
> My problem is that even though I added the permissions jar with my own
> SecurityEvaluator (a bit modified ShiroExampleEvaluator) to Fuseki
> correctly (with this example
> https://jena.apache.org/documentation/permissions/example.html), I cannot
> get it to process data through it. Fuseki is not seeing the incoming data
> as secured models.
>
> So, in short, the question is - how to set up Fuseki in such way, that it
> would see all incoming models as secured models and check the access level
> for those?
> And if it is impossible, what is the right way to add the permissions?
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Parameterized queries

2018-03-26 Thread Claude Warren
I don't know about escaping the values but there are 2 constructs that
might help you.


One is VALUES https://www.w3.org/TR/sparql11-query/#inline-data

query="select * where { VALUES ?sbj { 
> } $sbj a []  }"



and the other is IN https://www.w3.org/TR/sparql11-query/#func-in


query="select * where { $sbj a []. FILTER( $sbj in ( <
http://example.org/Alice>, > )}"


I suspect VALUES will serve you better as it can handle multiple parameters
and I think it is slightly more efficient in filtering the data stream
(though I could be wrong here).
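
If the client happens to be Java rather than Python, Jena's own
ParameterizedSparqlString does the escaping for you; a minimal sketch:

    import org.apache.jena.query.ParameterizedSparqlString;

    public class SafeParams {
        public static void main(String[] args) {
            ParameterizedSparqlString pss =
                new ParameterizedSparqlString("select * where { ?sbj a [] }");
            // Bound as an IRI node, not spliced in as raw text,
            // so injection attempts are escaped/rejected.
            pss.setIri("sbj", "http://example.org/Alice");
            System.out.println(pss.toString());
        }
    }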

Claude


On Mon, Mar 26, 2018 at 8:30 AM, Laura Morales  wrote:

> Is it possible to send a parameterized query to fuseki? I mean sending a
> query along with a list of parameters, more or less like this
>
> format=json
> query="select * where { $sbj a [] }"
> sbj=""
>
> similar to SQL parameterized queries, where parameters are automatically
> escaped in order to prevent injection attacks.
>
> I know this would be more of a client issue than server, but I can't find
> any library that does this, so I was wondering if Fuseki has anything like
> this built in. In particular, I'd need a library for Python. Do you guys
> know any by chance?
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Example code

2018-03-19 Thread Claude Warren
This may not be a Fuseki/Jena problem directly but it is one I have had to
deal with before.  Being a backend architect/developer I try to stay away
from the UI as much as possible.  However, there comes a time when someone
some where says "So how do I get the data out".  Right now on my €job I
have that question posed and I have an architect who wants to create a
bunch of REST service calls that do nothing but run standard parametrized
queries against Fuseki and present the results to the UI developers.  I
don't think this is the best solution.

I think this is a valid question, and I think it is a tough one to solve.

The shortest answer is probably to look at the Fuseki web app code and see
how it is done  there.

The longer answer is that front end developers will have to understand how
RDF works and some standard vocabularies.

For example: if a UI developer wants to get all the properties of a cow
<http://example.com/Cow> s/he needs to know to query with either:

Select distinct ?property WHERE {
  ?a rdf:type <http://example.com/Cow> .
  ?a ?property ?value .
}

or:

Select distinct ?property WHERE {
  ?property rdfs:domain <http://example.com/Cow> .
}

depending on what the UI requirements are.
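
And once one of those queries is settled, the "get the data out" step
really is only a few lines; a rough sketch (the endpoint URL and the Cow
class are placeholders) that turns the first query into a plain list a
combobox can consume:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.jena.query.*;

    public class PropertyLoader {

        // Run the query against a SPARQL endpoint and return the property
        // URIs as plain strings for a combobox / list widget.
        static List<String> loadProperties(String endpoint) {
            String qs = "SELECT DISTINCT ?property WHERE { "
                      + "  ?a a <http://example.com/Cow> . "
                      + "  ?a ?property ?value }";
            List<String> result = new ArrayList<>();
            try (QueryExecution qe = QueryExecutionFactory.sparqlService(endpoint, qs)) {
                ResultSet rs = qe.execSelect();
                while (rs.hasNext()) {
                    result.add(rs.next().getResource("property").getURI());
                }
            }
            return result;
        }
    }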

Next the JSON-LD format, while it is a valid json format, is not what most
UI developers have seen.  A bit of help explaining how to understand what
JSON-LD is saying would go a long way.

Now with all that being said, not all of it belongs in Jena documentation.
But then most of the RDF documentation sites I have seen are not well
maintained, so there is no place to go for good samples.

So, valid and important question, but I fear I don't have much in the way
of an answer as of yet.  I will be discussing this at work and will try to
post information as I develop it.

Claude

On Mon, Mar 19, 2018 at 11:59 AM, Laura Morales  wrote:

> TBH I think you're writing to the wrong mailing list. You should write to
> the  mailing list, and ask them to provide
> example code to use the UI with a Fuseki backend instead of MySQL.
>
>
>
>
> Sent: Monday, March 19, 2018 at 12:33 PM
> From: "David Moss" 
> To: users@jena.apache.org
> Subject: Re: Example code
>
> On 19/3/18, 5:39 pm, "Lorenz Buehmann"  leipzig.de> wrote:
>
> >Well, isn't that the task of the UI logic? You get JSON-LD and now you
> >can visualize it. I don't really see the problem here?
>
> Therein lies the problem. I'm sure _you_ know how to do it.
> How does someone without experience in integrating Jena with UI know how
> to do it?
>
> >dataset -> query -> data -> visualization (table, graph, etc.)
>
> Those are indeed a set of steps. Do you have an example of how to do that
> in java code and load the result into a combobox for selection in a UI?
>
> >Why should this be an example on the Apache Jena documentation?
>
> It shouldn't. It should be stored separately from the Apache Jena
> documentation.
> The Javadoc is for how Jena works internally and how to maintain Jena
> itself.
> I'm talking about examples to help people use Jena in the kind of
> applications people want to use.
>
> One of the dilemmas I have regarding Jena is how to store query results
> locally.
> I could use Jena to query an endpoint, iterate through the ResultSet and
> build POJOs or Tables.
> Or is it better to keep the results in a Model and query that again to
> build UI components?
> Or maybe I should ditch the fancy Jena objects and just get a result as a
> JSON object and work with that?
>
> These are all possibilities, but how is it actually being done in real
> projects? Where are the examples?
>
> A reply like "dataset -> query -> data -> visualization (table, graph,
> etc.)" is very glib, but it doesn't actually have anything in the way of
> example code that can be used by people new to Jena in their own real-world
> programs. That is what I see as missing.
>
>
> DM
>
>
>
>
>
>
>
>
>
>
>
>
> On 19.03.2018 08:31, David Moss wrote:
> > That is certainly a way to get data from a SPARQL endpoint to display in
> a terminal window.
> > It does not store it locally or put it into a user-friendly GUI control
> however.
> > Looks like I might have to roll my own and face the music publicly if
> I'm doing it wrong.
> >
> > I think real-world examples of how to use Jena in a user friendly
> program are essential to advancing the semantic web.
> > Thanks for considering my question.
> >
> > DM
> >
> > On 19/3/18, 4:19 pm, "Laura Morales"  wrote:
> >
> > As far as I know the only way to query a Jena remotely is via HTTP. So,
> install Fuseki and then send a traditional HTTP GET/POST request to it with
> two parameters, "query" and "format". For example
> >
> > $ curl --data "format=json&query=..." http://your-endpoint.org
> >
> >
> >
> > Sent: Sunday, March 18, 2018 at 11:26 PM
> > From: "David Moss" 
> > To: users@jena.apache.org
> > Subject: Re: Example code
> >
> > On 18/3/18, 6:24 pm, "Laura Morales"  wrote:
> >
> > >> For example, when using data from a SPARQL endpoint, what is the
> acc

Re: client/server communication protocol

2018-03-13 Thread Claude Warren
If we take this to the Java environment for a moment, you can read/write
directly to the jena database using the java classes.  I suppose you could
set this up to call it from PHP/Python.

HOWEVER, and it is a big however, there are synchronization (read/write)
issues with doing this.  Fuseki has the code to ensure that the
synchronization is handled correctly.

In addition, I have found recently that the RDFConnection class makes it
much easier to write code that will run against both a local and a remote
system.
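
For illustration, a minimal sketch (the URL is a placeholder); swapping
the connect call is the only change needed to go local:

    import org.apache.jena.rdfconnection.RDFConnection;
    import org.apache.jena.rdfconnection.RDFConnectionFactory;

    public class ConnDemo {
        public static void main(String[] args) {
            // Swap the URL for RDFConnectionFactory.connect(someDataset)
            // and the rest of the code runs unchanged against a local store.
            try (RDFConnection conn =
                     RDFConnectionFactory.connect("http://localhost:3030/ds")) {
                conn.querySelect("SELECT * { ?s ?p ?o } LIMIT 5",
                                 row -> System.out.println(row));
            }
        }
    }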

My suggestion is that if you are going to do this, explore the Fuseki code
and understand how it does locking etc.  It has been awhile since I was
down in that code and what I remember may have changed.  YMMV.

Claude

On Tue, Mar 13, 2018 at 7:35 AM, Laura Morales  wrote:

> I forgot to mention that I'm not looking at this from the perspective of a
> user who wants to use a public endpoint. I'm looking at this from the
> perspective of a developer making a website and using Jena/Fuseki as a
> non-public backend database.
>
>
>
>
> Sent: Tuesday, March 13, 2018 at 8:29 AM
> From: "Laura Morales" 
> To: users@jena.apache.org
> Cc: users@jena.apache.org
> Subject: Re: client/server communication protocol
> Am not saying one is better or worse than the other, I'm merely trying to
> understand. If I understand correctly Fuseki is responsible for handling
> connections, after then it passes my query to Jena which essentially will
> parse my query and retrieve the data from a memory mapped file (TDB).
> Since MySQL/Postgres use a custom binary protocol, I'm simply asking
> myself if HTTP adds too much overhead and latency (and therefore is
> significantly slower when dealing with a lot of requests) compared to a
> custom protocol programmed on a lower level socket.
>
>
>
>
> Sent: Tuesday, March 13, 2018 at 8:11 AM
> From: "Lorenz Buehmann" 
> To: users@jena.apache.org
> Subject: Re: client/server communication protocol
> Well, Fuseki is exactly the HTTP layer on top of Jena. Without Fuseki,
> which protocol do you want to use to communicate with Jena? The SPARQL
> protocol [1] perfectly standardizes the communication via HTTP. Without
> Fuseki, who should do the HTTP handling? Clearly, you could setup your
> own Java server and do all the communication by yourself, e.g. using low
> level sockets etc. - whether this makes sense, I don't know. I'd always
> prefer standards, especially if you already have something like Fuseki
> which does all the connection handling.
>
>
> [1] https://www.w3.org/TR/sparql11-http-rdf-update/
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Jena Property Table

2018-02-28 Thread Claude Warren
This sounds like a data serialization question.   Would it not be easier to
write out the triples in turtle and read them in by normal means?
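
With RIOT that round trip is only a few lines; a sketch (the file name is
arbitrary):

    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.riot.Lang;
    import org.apache.jena.riot.RDFDataMgr;

    public class TurtleRoundTrip {

        static void save(Model model, String file) throws Exception {
            try (OutputStream out = new FileOutputStream(file)) {
                RDFDataMgr.write(out, model, Lang.TURTLE);  // serialize
            }
        }

        static Model load(String file) {
            return RDFDataMgr.loadModel(file);  // format detected from extension/content
        }
    }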

On Feb 28, 2018 6:36 AM, "Samita Bai / PhD CS Scholar @ City Campus" <
s...@iba.edu.pk> wrote:

> Yes Andy, I am also thinking to switch to TDB, cheers.
>
>
> I felt so good you helped me.
>
>
> Now I can convince my supervisor 😊
>
>
> Regards,
>
> Samita Bai
>
> 
> From: Andy Seaborne 
> Sent: 28 February 2018 16:24:21
> To: users@jena.apache.org
> Subject: Re: Jena Property Table
>
> I read the javadoc :-)
>
> """
> Each Row of the PropertyTable has an unique rowKey Node of the subject
> (or s for short).
> """
>
> I'm not convinced all the implementations follow this but if they don't,
> I do not see how it is useable.
>
> A direct implementation as a graph looks just as easy.
>
> If you don't have a lot or data and so don't need the compactness of a
> custom implementation, use Modelfactory.createDefaultModel (or the graph
> version Factory.createGraphMem( )). Because Graph is an interface, you
> can replace the default implementation later.
>
> If you are going to be doing querying and updates, then DatasetFactory
> and use a transactional dataset (or TDB for a persistent one).
>
> It depends what's important to your project - if this is a necessary
> implementation component with no performance implications, do whatever
> is the least work.
>
> Could you say something about your project and what are you trying to do?
>  Andy
>
> On 28/02/18 11:07, Samita Bai  / PhD CS Scholar @ City Campus wrote:
> > Thank you Andy for such a detailed reply. I am not getting this 'one
> subject only'. Is it not possible to add more subjects in a single property
> table?
> >
> > And you are suggesting me to extend GraphBase right? Ok I will work it
> now.
> >
> >
> >
> > Thank you so much.
> >
> > 
> > From: Andy Seaborne 
> > Sent: 28 February 2018 15:50:38
> > To: users@jena.apache.org
> > Subject: Re: Jena Property Table
> >
> > The PropertyTable abstraction is for regular data where for each subject
> > there are the same properties, and the same number of each property for
> > every subject.
> >
> > Each row of the property table has a unique key which is the subject of
> > each triple. Being unique, it is assuming one subject - one row.
> >
> >
> > If your data is one subject, one RDFS.seeAlso, a property table could be
> > used. If your data is one subject, potentially many RDFS.seeAlso, then
> > PropertyTable isn't the right abstaction.
> >
> > GraphPropertyTable is the Graph implementation over PropertyTable.
> > Graphs are central to Jena.  A quick look at the code and I think it has
> > bugs (it ignores the subject in a Graph.find operation which looks wrong
> > to me).
> >
> > Personally, I'd consider not using that at all but instade doing your
> > own implementation of Graph by extending GraphBase ; you only have to
> > implement:
> >
> > performAdd( Triple t )
> > performDelete( Triple t )
> > graphBaseFind(Triple triplePattern)
> >
> > then check triple added match your restriction to RDFS.seeAlso
> > Choose the datastructure to store the (subject, object) pairs needed.
> > (A Multimap for example).
> >
> > And write test cases :-)
> >
> >   Hope that helps,
> >   Andy
> >
> > The name is taken from this earlier work:
> > http://www.hpl.hp.com/techreports/2006/HPL-2006-140.html
> > which is for a SQL-storage system no longer in Jena.
> >
> >
> > On 28/02/18 10:31, Samita Bai  / PhD CS Scholar @ City Campus wrote:
> >> Dear Andy,
> >>
> >>
> >> I am working on my Ph.D thesis and I have created a dataset of triples
> in which the property is same (i.e. RDFS.seeAlso), means I have only one
> property for all triples. I thought I will be creating a property table
> with one column and many rows. And I coded it too. But I am bit concerned
> because property table is now deprecated.
> >>
> >>
> >> Please suggest me any indexing or storing mechanism where we want to
> store only one property for many triples.
> >>
> >>
> >> Thanks & Regards,
> >>
> >> Samita Bai
> >>
> >> 
> >> From: Andy Seaborne 
> >> Sent: 28 February 2018 15:26:25
> >> To: users@jena.apache.org
> >> Subject: Re: Jena Property Table
> >>
> >> Hi Samita,
> >>
> >> The jena-csv module sees less attention than the major modules and is
> >> considered "legacy".  That said, it is not going away any time soon and
> >> the source code will always be available.
> >>
> >> Could you say something about your project and what are you trying to do
> >> with property tables?  Maybe there is a different approach somewhere.
> >>
> >>Andy
> >>
> >>
> >>
> >> On 28/02/18 09:36, Samita Bai  / PhD CS Scholar @ City Campus wrote:
> >>> Oh Ok, then please follow this link
> >>>
> >>>
> >>> https://github.com/apache/jena/tree/master/jena-csv/src/main/java/org/apache/jena/propertytable
> >>>

Re: Use PREFIXes by default

2018-02-25 Thread Claude Warren
Awhile back I was looking for a way to change/add to the list of prefixes
at runtime.  This request dovetails nicely with what I was looking for.  I
got sidetracked with other issues at work before I could really get to it.

As I recall most storage layers don't keep a list of namespaces separate
from the nodes they are used with so there may not be a quick mechanism to
determine the namespaces at startup and the solution may have to resort to
querying for all the namespaces in the stored graph(s) first. So
determining which prefixes to use may be an issue.

However, once the prefixes have been determined (even if from the current
configuration file that is currently used) adding something like a custom
header to Fuseki processing to add the prefixes to a query should be fairly
simple.

The only issue will be how to do the overrides.  I'm not certain but
current processing may help here.  I expect (but do not have time to
test) that later definitions override earlier definitions; providing a query
with a prefix defined 2x and seeing what it does would tell you whether or
not it would be as simple as adding the default prefixes to the front of the
query input stream before it was processed normally.

The solution seems easy enough and unobtrusive enough to be implementable
and unobjectionable.

Personally, I can see that it would be handy to have the system add all the
prefixes it knows about on every query so that users always get back the
shortened (more familiar) versions of the URLs as they are much easier to
scan and understand.
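
As a rough sketch of the "prepend to the query input stream" idea (the
prefix map here is made up; a real server would pull it from its
configuration, and whether an in-query PREFIX cleanly overrides a prepended
one is exactly the untested question above):

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.shared.PrefixMapping;

public class PrefixInjector {
    public static void main(String[] args) {
        // Hypothetical defaults; a real server would load these at startup.
        PrefixMapping defaults = PrefixMapping.Factory.create()
                .setNsPrefix("foaf", "http://xmlns.com/foaf/0.1/")
                .setNsPrefix("dc", "http://purl.org/dc/elements/1.1/");

        String incoming = "SELECT ?name WHERE { ?s foaf:name ?name }";

        // Prepend a PREFIX declaration for every known prefix, then parse as usual.
        StringBuilder sb = new StringBuilder();
        defaults.getNsPrefixMap().forEach((prefix, uri) ->
                sb.append("PREFIX ").append(prefix).append(": <").append(uri).append(">\n"));
        sb.append(incoming);

        Query query = QueryFactory.create(sb.toString());
        System.out.println(query);
    }
}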

Claude



On Mon, Feb 26, 2018 at 6:51 AM, Laura Morales  wrote:

> Because to me it seems like a very useful feature for Fuseki. For users
> it's simpler to think in terms of short properties instead of long urls, so
> they can submit queries right away without worrying of what is the exact
> url for a particular prefix. I'm not saying that this is a fundamental
> feature that Fuseki must have, but rather just an option that would be
> really useful. By the way, this is already happening with "a". In order to
> use this keyword, there is no need to use "PREFIX rdf: <
> http://www.w3.org/1999/02/22-rdf-syntax-ns#>" all the time. Which I think
> is really useful because I can remember "a" by heart but I would never be
> able to recall such a long prefix.
> Should maybe this be in the SPARQL specifications before Fuseki adds it,
> the same way "a" is? I could send an email to w3c for consideration.
>
>
>
>
> Sent: Sunday, February 25, 2018 at 10:46 PM
> From: ajs6f 
> To: users@jena.apache.org
> Subject: Re: Use PREFIXes by default
> If you are not concerned about performance, why not add those prefixes
> client-side?
>
>
> ajs6f
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Vocabulary for provenance

2018-02-25 Thread Claude Warren
I would avoid anonymous nodes as they are just too hard to work with when
trying to manage the data using SPARQL endpoints.  On the other hand, if you
never produce a SPARQL endpoint this is not an issue.  On the third hand, I
always find projects grow beyond what I first envisioned, and SPARQL makes
application clustering easier.

Claude

On Thu, Feb 22, 2018 at 10:52 AM, Laura Morales  wrote:

> I used "provenance" as in "where does this piece of information come
> from", but it looks like it has a well defined meaning so perhaps I've used
> the wrong term.
> Looks like void is closer to my use case indeed, however prov-o seems to
> have some interesting properties as well.
> Thanks for the help.
>
>
>
>
> Sent: Thursday, February 22, 2018 at 11:39 AM
> From: "Martynas Jusevičius" 
> To: jena-users-ml 
> Subject: Re: Vocabulary for provenance
> Since you mention "provenance", PROV ontology would be one option:
> https://www.w3.org/TR/prov-o/
>
> But your usage looks more like VoID, more specifically void:inDataset:
> https://www.w3.org/TR/void/#backlinks[https://www.w3.org/
> TR/void/#backlinks]
>
> PROV and VoID can be combined of course.
>
> On Thu, Feb 22, 2018 at 11:35 AM, Laura Morales  wrote:
>
> > Which vocabulary is a good choice to describe provenance or to describe
> > graphs? I'd like to use something like this
> >
> >   "Graph-1"
> >   
> >
> > or like this
> >
> >   [
> >  "Graph-1" ;
> >  
> > ]
> >
> > there are so many vocabularies out there that I don't even know where to
> > start looking at. Is there anything available for this in schema.org?
> > Otherwise I'd like to know if there exist any vocabulary that is used by
> > others, and more importantly which is not abandoned or dead.
> >
> > Thanks.
> >
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Recursive SPARQL ( aka arbitrary length path ) : performance

2018-02-08 Thread Claude Warren
Wouldn't the union graph provide different answers in some cases?

for example

g2 contains

 ?sub rdfs:subClassOf  ex:foo

and g3 contains

ex:foo rdfs:subClassOf  .

the original query would not resolve

?sub rdfs:subClassOf*  .

would it?

but the union graph would.

At least I think that is the case.


On Thu, Feb 8, 2018 at 8:39 AM, Jean-Marc Vanel 
wrote:

> Hi
>
> I wonder about performance of  arbitrary length path in Jena :
> https://www.w3.org/TR/2013/REC-sparql11-query-20130321/#
> propertypath-arbitrary-length
>
> For example , here is the query in semantic_forms for searching a string
> with a type class. I added yesterday the rdfs:subClassOf* pattern.
> I wonder if it would not be more efficient with the unionGraph instead of
> numerous GRAPH blocks .
>
> PREFIX text: <http://jena.apache.org/text#>
> PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
> PREFIX form: <
> http://raw.githubusercontent.com/jmvanel/semantic_forms/
> master/vocabulary/forms.owl.ttl#
> >
>
> SELECT DISTINCT ?thing ?COUNT WHERE {
>  ?thing text:query ( 'Jean*' ) .
>
>  graph ?g1 {
>?thing a ?sub .
>  }
>  graph ?g2 {
>?sub rdfs:subClassOf*  .
>  } .
>  OPTIONAL {
>graph ?grCount {
> ?thing form:linksCount ?COUNT.
>   } }
> }
> ORDER BY DESC(?COUNT)
> LIMIT 10
>
>
> --
> Jean-Marc Vanel
> http://www.semantic-forms.cc:9111/display?displayuri=http:/
> /jmvanel.free.fr/jmv.rdf%23me#subject
>  /jmvanel.free.fr/jmv.rdf%23me>
> Déductions SARL - Consulting, services, training,
> Rule-based programming, Semantic Web
> +33 (0)6 89 16 29 52
> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Txn code not handling type of transaction

2017-12-27 Thread Claude Warren
While my code does not solve the problem of lock promotion, it handles the
other cases.

Since there is no lock promotion, there is no way to do what you want unless
you convert all read locks to write locks, or build some sort of
wrapper/interceptor that releases the lock and starts again when necessary.
That sounds fraught with problems.

Claude

On Wed, Dec 27, 2017 at 12:02 PM, George News  wrote:

> On 2017-12-27 12:52, dandh988 wrote:
> > LOL. Similar here, except I wrapped the Transactional with a
> > Reenterent lock because I needed to try the begin. A side effect
> > being I can promote the underlying Jena Transactional if I can
> > promote the Reenterent lock. With a few caveats... There's a volatile
> > write timestamp which I check after promoting to ensure I don't end
> > up reading a different write transaction.
>
> Do you have example code? Just to have a guide to migrate the idea.
>
> >
> > Dick  Original message From: Claude Warren
> >  Date: 27/12/2017  11:29  (GMT+00:00) To:
> > users@jena.apache.org Subject: Re: Txn code not handling type of
> > transaction I recently wrote some code to  try to handle a similar
> > situation.  In my case I knew I needed a transaction to be active at
> > various points so I created a TransactionHolder.  I create the holder
> > and passing the object that has implements Transactional as well as
> > the type of ReadWrite I want.
> >
> > If the transaction is active it does nothing (and I hope the proper
> > transaction has been started) otherwise It starts the transaction. Ad
> > the end I call commit or abort as appropriate.  If I did not start
> > the transaction the commit, abort or end is ignored.
> >
> > I think there may be an issue with abort in that it should
> > probablyset up end() to throw an exception when I have not created
> > the transaction  so that the outer transaction will fail.
> >
> > import org.apache.jena.query.ReadWrite; import
> > org.apache.jena.sparql.core.Transactional;
> >
> > public class TransactionHolder  { private final Transactional txn;
> > private final boolean started; private final ReadWrite rw;
> >
> > public TransactionHolder( Transactional txn, ReadWrite rw ) {
> > this.txn = txn; this.rw = rw; started = ! txn.isInTransaction(); if
> > (started) { txn.begin( rw ); } }
> >
> > public boolean ownsTranaction() { return started; }
> >
> > public void commit() { if (started) { txn.commit(); } }
> >
> > public void abort() { if (started) { txn.abort(); } }
> >
> > public void end() { if (started) { txn.end(); } }
> >
> > }
> >
> >
> > On Wed, Dec 27, 2017 at 11:03 AM, dandh988 
> > wrote:
> >
> >> You cannot nest transactions nor can you promote a read to a
> >> write. You need to rewrite your code or use txn which correctly
> >> checks if a transaction is available and if not will begin the
> >> correct one, either READ or WRITE.
> >>
> >>
> >> Dick  Original message From: George News
> >>  Date: 27/12/2017  10:27  (GMT+00:00) To: Jena
> >> User Mailing List < users@jena.apache.org> Subject: Txn code not
> >> handling type of transaction Hi,
> >>
> >> As you know from other threads I'm having some issues with
> >> transactions. Your suggestion is to use Txn instead of begin/end.
> >> Just for curiosity I have checked the Txn code at [1] and it seems
> >> that inside you use begin/end.
> >>
> >> However I have a doubt concerning how you handle the begin/end for
> >> READ and WRITE. It seems that you open a transaction based on
> >> txn.isInTransaction(), but how do you know if it is a READ or
> >> WRITE?
> >>
> >> If you create something like:
> >>
> >> Txn.executeRead(dataset, { Txn.executeWrite(dataset, { // Whatever
> >> } } }
> >>
> >> the txn.begin(ReadWrite.WRITE) is not called and therefore it might
> >> be leading to unexepected behaviours for the txn.commit().
> >>
> >> could you give some hints on how this is handle internally? Before
> >> fully modify the code I have, it might be easier to replicate the
> >> txn behaviour ;) but I would like to know the above (if possible).
> >>
> >> As always, thanks in advanced Jorge
> >>
> >> [1]: jena/jena-arq/src/main/java/org/apache/jena/system/Txn.java
> >>
> >
> >
> >
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Txn code not handling type of transaction

2017-12-27 Thread Claude Warren
I recently wrote some code to try to handle a similar situation.  In my
case I knew I needed a transaction to be active at various points, so I
created a TransactionHolder.  I create the holder, passing in the object
that implements Transactional as well as the type of ReadWrite I want.

If a transaction is already active it does nothing (and I hope the proper
transaction has been started); otherwise it starts the transaction.
At the end I call commit or abort as appropriate.  If I did not start the
transaction, the commit, abort or end is ignored.

I think there may be an issue with abort, in that it should probably set up
end() to throw an exception when I have not created the transaction, so
that the outer transaction will fail.

import org.apache.jena.query.ReadWrite;
import org.apache.jena.sparql.core.Transactional;

public class TransactionHolder {
    private final Transactional txn;
    private final boolean started;   // true if this holder began the transaction
    private final ReadWrite rw;

    public TransactionHolder(Transactional txn, ReadWrite rw) {
        this.txn = txn;
        this.rw = rw;
        // Only begin a new transaction if one is not already active.
        started = !txn.isInTransaction();
        if (started) {
            txn.begin(rw);
        }
    }

    public boolean ownsTransaction() {
        return started;
    }

    // commit/abort/end are no-ops when an enclosing transaction owns the work.
    public void commit() {
        if (started) {
            txn.commit();
        }
    }

    public void abort() {
        if (started) {
            txn.abort();
        }
    }

    public void end() {
        if (started) {
            txn.end();
        }
    }
}
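
A usage sketch (the dataset variable is hypothetical; the point is that the
inner holder notices the outer transaction, so its commit and end become
no-ops):

TransactionHolder outer = new TransactionHolder(dataset, ReadWrite.WRITE);
try {
    // Library code can safely create its own holder without nesting errors.
    TransactionHolder inner = new TransactionHolder(dataset, ReadWrite.WRITE);
    // ... do some work on the dataset ...
    inner.commit();   // ignored: the outer holder owns the transaction
    outer.commit();
} finally {
    outer.end();
}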


On Wed, Dec 27, 2017 at 11:03 AM, dandh988  wrote:

> You cannot nest transactions nor can you promote a read to a write.
> You need to rewrite your code or use txn which correctly checks if a
> transaction is available and if not will begin the correct one, either READ
> or WRITE.
>
>
> Dick
>  Original message From: George News 
> Date: 27/12/2017  10:27  (GMT+00:00) To: Jena User Mailing List <
> users@jena.apache.org> Subject: Txn code not handling type of transaction
> Hi,
>
> As you know from other threads I'm having some issues with transactions.
> Your suggestion is to use Txn instead of begin/end. Just for curiosity I
> have checked the Txn code at [1] and it seems that inside you use
> begin/end.
>
> However I have a doubt concerning how you handle the begin/end for READ
> and WRITE. It seems that you open a transaction based on
> txn.isInTransaction(), but how do you know if it is a READ or WRITE?
>
> If you create something like:
>
> Txn.executeRead(dataset, {
> Txn.executeWrite(dataset, {
>// Whatever
> }
>   }
> }
>
> the txn.begin(ReadWrite.WRITE) is not called and therefore it might be
> leading to unexepected behaviours for the txn.commit().
>
> could you give some hints on how this is handle internally? Before fully
> modify the code I have, it might be easier to replicate the txn
> behaviour ;) but I would like to know the above (if possible).
>
> As always, thanks in advanced
> Jorge
>
> [1]: jena/jena-arq/src/main/java/org/apache/jena/system/Txn.java
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: docker fuseki

2017-12-19 Thread Claude Warren
We are currently using mem-backed datasets in testing, but I expect TDB.  We
just want to create the dataset during startup if it does not exist.
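
Something like the following assembler fragment, dropped into the
container's Fuseki configuration directory, should do it for TDB, since TDB
creates the database at tdb:location on first use if it is missing.  This
is an untested sketch from memory of the Fuseki 2 config vocabulary, and
the names and paths are made up:

@prefix :       <#> .
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

:service rdf:type fuseki:Service ;
    fuseki:name          "ds" ;             # service appears at /ds
    fuseki:serviceQuery  "query" ;
    fuseki:serviceUpdate "update" ;
    fuseki:dataset       :tdbDataset .

:tdbDataset rdf:type tdb:DatasetTDB ;
    tdb:location "/fuseki/databases/ds" .   # created on first start if absent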

Claude

On Tue, Dec 19, 2017 at 4:27 PM, ajs6f  wrote:

> Claude--
>
> Do you mean a TDB-backed dataset, or an in-mem dataset?
>
> For either with Docker, I would think you should be able to accompany the
> Dockerfile with an appropriate config directory.
>
> For Fuseki alone, a config file with assembler RDF would handle an in-mem
> dataset, but I'm not sure about a TDB dataset.
>
> ajs6f
>
> > On Dec 19, 2017, at 11:24 AM, Claude Warren  wrote:
> >
> > Is there a simple way to have the docker (or naked fuseki) create an
> empty
> > dataset on startup?
> >
> > We would like to create the dataset (preferably if it doesn't exist) when
> > we start fuseki in the docker container.
> >
> > Claude
> >
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
>
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


docker fuseki

2017-12-19 Thread Claude Warren
Is there a simple way to have the docker (or naked fuseki) create an empty
dataset on startup?

We would like to create the dataset (preferably if it doesn't exist) when
we start fuseki in the docker container.

Claude

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Does Jena support query over Cassandra?

2017-12-19 Thread Claude Warren
Claire,

Depends on what you mean by querying over Cassandra.  Jena supports a
pluggable storage layer.  To that end I have written a storage layer for
Jena on Cassandra (https://github.com/Claudenw/jena-on-cassandra) that is
functional but has not been tested at scale/load.

If you mean can it create triples from arbitrary data in Cassandra and
query that, I don't know of any tool to do that.

Claude

On Tue, Dec 19, 2017 at 10:25 AM, claire Qiu 
wrote:

> Hi,
>
> I would like to ask if Jena suppors querying over Cassandra?
>
> Thanks a lot!
>
> Best,
> Claire
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Finding differences between graphs (Was: Jena/Fuseki graph sync)

2017-12-08 Thread Claude Warren
I am not sure what digest you mean, but the Fuseki graphs do have an
isomorphism check, so you could compare CONSTRUCT results.

Claude

On 9 Dec 2017 04:11, "Dan Davis"  wrote:

> So, this is what I was asking about earlier.  With small graphs, e.g.
> DESCRIBE <>, the algorithms for graph isomorphism that support blank
> nodes should be good.   rdflib includes an implementation, and I wish I
> knew whether there is an implementation of that digest algorithm for Jena.
>
> On Fri, Dec 8, 2017 at 2:27 AM, Claude Warren  wrote:
>
> > On Fri, Nov 24, 2017 at 12:19 PM, Laura Morales 
> wrote:
> >
> > > > What about simply deleting the old graph and loading the triples of
> the
> > > > .nt file into the graph afterwards? I don't see any benefit of such a
> > > > "tool" - you could just write your own bash script for this if you
> need
> > > > this quite often.
> > >
> > > The advantage is with large graphs, such as wikidata. If I download
> their
> > > dumps once a week, it's much more efficient to only change a few
> triples
> > > instead of deleting the entire graph and recreating the whole TDB
> store.
> > >
> >
> >
> > Performing a diff between two graphs with blank nodes might be speed up
> > using bloom filters.
> >
> > I have code that represents triples as bloom filters and I know that 9
> byte
> > filters will work for very large graphs so you could probably get aways
> > with 8 bytes to make them fit in a standard integer size.
> >
> > This is a multiple pass operation.
> >
> > create a bloom filter for each node in graph A.  Call this list A
> >
> > step through  graph B creating bloom filters for each triple. if the
> triple
> > in question has blank nodes only encode non blank nodes
> >
> > If the bloom filter is not in List A it is new.
> >
> > if the bloom filter is in list A then it may be new and a direct lookup
> in
> > graph A. if it is not found add it
> >
> > If your filter list has a pointer to the triples that it represents
> > (remember there can be bloom filter collisions) then you can rapidly
> > determine if there is a match and you also have a good starting place to
> do
> > blank node comparisons to determine if the triples are equivalent.
> >
> > If anyone is interested in trying this I have some triple/bloom filter
> code
> > in my github repository.
> >
> > Claude
> >
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
> >
>


Re: Finding differences between graphs (Was: Jena/Fuseki graph sync)

2017-12-07 Thread Claude Warren
https://github.com/Claudenw/BloomFilter

https://github.com/Claudenw/BloomGraph

Both are just playground code.  I would be happy to discuss any issues with
you.

Claude

On 8 Dec 2017 08:43, "Laura Morales"  wrote:

> > If anyone is interested in trying this I have some triple/bloom filter
> code
> > in my github repository.
>
> Link?
>


Finding differences between graphs (Was: Jena/Fuseki graph sync)

2017-12-07 Thread Claude Warren
On Fri, Nov 24, 2017 at 12:19 PM, Laura Morales  wrote:

> > What about simply deleting the old graph and loading the triples of the
> > .nt file into the graph afterwards? I don't see any benefit of such a
> > "tool" - you could just write your own bash script for this if you need
> > this quite often.
>
> The advantage is with large graphs, such as wikidata. If I download their
> dumps once a week, it's much more efficient to only change a few triples
> instead of deleting the entire graph and recreating the whole TDB store.
>


Performing a diff between two graphs with blank nodes might be sped up
using bloom filters.

I have code that represents triples as bloom filters, and I know that 9-byte
filters will work for very large graphs, so you could probably get away
with 8 bytes to make them fit in a standard integer size.

This is a multiple pass operation.

Create a bloom filter for each triple in graph A.  Call this list A.

Step through graph B creating bloom filters for each triple; if the triple
in question has blank nodes, only encode the non-blank nodes.

If the bloom filter is not in list A, the triple is new.

If the bloom filter is in list A, the triple may still be new; do a direct
lookup in graph A, and if it is not found, add it.

If your filter list has a pointer to the triples it represents (remember
there can be bloom filter collisions), then you can rapidly determine
whether there is a match, and you also have a good starting place for the
blank node comparisons that determine whether the triples are equivalent.
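
For anyone who wants the shape of the first passes without digging through
my repositories, here is a rough sketch using Guava's BloomFilter in place
of my own code (the key format, filter size and error rate are arbitrary
choices):

import java.nio.charset.StandardCharsets;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import org.apache.jena.graph.Graph;
import org.apache.jena.graph.Node;
import org.apache.jena.graph.Triple;

public class BloomDiff {
    // Key a triple by its concrete nodes; blank nodes are wildcarded.
    static String key(Triple t) {
        return part(t.getSubject()) + "|" + part(t.getPredicate()) + "|" + part(t.getObject());
    }

    static String part(Node n) {
        return n.isBlank() ? "_" : n.toString();
    }

    public static void diff(Graph a, Graph b) {
        BloomFilter<String> inA = BloomFilter.create(
                Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);
        a.find(Node.ANY, Node.ANY, Node.ANY).forEachRemaining(t -> inA.put(key(t)));

        b.find(Node.ANY, Node.ANY, Node.ANY).forEachRemaining(t -> {
            if (!inA.mightContain(key(t))) {
                System.out.println("definitely new: " + t);    // no filter match
            } else if (!a.contains(t)) {
                // Filter collision, or a triple with blank nodes: this is where
                // the pairwise blank node comparison described above starts.
                System.out.println("candidate: " + t);
            }
        });
    }
}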

If anyone is interested in trying this I have some triple/bloom filter code
in my github repository.

Claude

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Jena Initializer Error

2017-11-22 Thread Claude Warren
Neda,

What is the call stack from the exception?  It would be nice to know what
it says, as that would point to the location of the error.

I suspect that you are not including all the necessary Jena libraries, but
cannot be certain.

Claude

On Thu, Nov 23, 2017 at 1:25 AM, Neda Alipanah 
wrote:

> Hello there,
> I have a quick question. I am loading a 25 Meg Owl file to the memory
> using the following commands. My code is working fine through the
> IDE(IntelliJ), but when I create a runnable Jar, it does not find the file.
> I already put the owl file directory in the class path but I get Exception
> In Initializer error.
>
> // Create an empty model
> model = ModelFactory.createOntologyModel(OntModelSpec.OWL_DL_MEM);
>
> // Use the FileManager to find the input file
> InputStream in = FileManager.get().open(inPath);
>
> [image: Inline image 1]
>
> Really appreciate if you can provide a solution for the problem.
>
>
> Best Regards,
>
> Neda
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: problem with VALUES querybuilder

2017-11-21 Thread Claude Warren
I have added options to add values inside the graph patterns but have not
had time to write the test cases yet.

I expect to have the additional features added and tested tomorrow night.

I am also adding a series of addGraph( graph, [ some sort of triple here ] )
methods to create simple GRAPH queries where a single triple is all that
is requested.  I found I wanted this while I was working on some other code
today. ;)

Claude

On Tue, Nov 21, 2017 at 2:52 PM, Andy Seaborne  wrote:

> Yes, there is a difference.
>
> It (the join) happens just before project and after any GROUP BY.
>
> See the algebra at http://www.sparql.org/query-validator.html
>
> Andy
>
>
> On 21/11/17 14:46, Claude Warren wrote:
>
>> based on https://www.w3.org/TR/sparql11-query/#inline-data-examples
>>
>> there is no difference between values  blocks inside or outside a graph
>> pattern.
>>
>> On Tue, Nov 21, 2017 at 2:35 PM, Claude Warren  wrote:
>>
>> Currently the values are always placed in the top level of the query.
>>>
>>> Q: does it make a difference to exeuction?  (I suspect it does but I want
>>> to make sure before I proceed to add a method to place it inside the
>>> graph
>>> pattern.
>>>
>>> Claude
>>>
>>> On Tue, Nov 21, 2017 at 1:20 PM, Rob Vesse  wrote:
>>>
>>> The output you get is syntactically valid - VALUES is allowed at the top
>>>> level of the query as well as within graph patterns
>>>>
>>>>   It is not clear to me if the latter this Is actually possible with the
>>>> current query builder, Claude can probably give you a more detailed
>>>> answer
>>>>
>>>> Rob
>>>>
>>>>
>>>> On 21/11/2017, 12:05, "Chris Dollin" 
>>>> wrote:
>>>>
>>>>  Dear All
>>>>
>>>>  I'm missing something with use of the query builder to create
>>>> VALUES
>>>>  clauses.
>>>>  The code
>>>>
>>>>  @Test public void buildValues() {
>>>>  SelectBuilder sb = new SelectBuilder();
>>>>  sb.addValueVar("item",  "spoo", "flarn");
>>>>  System.err.println(sb.buildString());
>>>>  }
>>>>
>>>>  generates
>>>>
>>>>SELECT  *
>>>>WHERE
>>>>  {  }
>>>>VALUES ?item { "spoo" "flarn" }
>>>>
>>>>  which I believe to be syntactically incorrect but in any case I
>>>> want
>>>> the
>>>>  generated VALUES clause to be inside the WHERE {} ie
>>>>
>>>>SELECT * WHERE {VALUES ?item {"spoo" "flarn"}}}
>>>>
>>>>  What should I be doing and how should I have known that?
>>>>
>>>>  Chris
>>>>
>>>>  PS please to excuse the misuse of @Test here ... exploratory use
>>>> only.
>>>>
>>>>  --
>>>>  "What I don't understand is this ..."   Trevor Chaplin, /The
>>>> Beiderbeck
>>>>  Affair/
>>>>
>>>>  Epimorphics Ltd, http://www.epimorphics.com
>>>>  Registered address: Court Lodge, 105 High Street, Portishead,
>>>> Bristol
>>>> BS20
>>>>  6PT
>>>>  Epimorphics Ltd. is a limited company registered in England (number
>>>> 7016688)
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>> --
>>> I like: Like Like - The likeliest place on the web
>>> <http://like-like.xenei.com>
>>> LinkedIn: http://www.linkedin.com/in/claudewarren
>>>
>>>
>>
>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: problem with VALUES querybuilder

2017-11-21 Thread Claude Warren
Based on https://www.w3.org/TR/sparql11-query/#inline-data-examples,
there is no difference between VALUES blocks inside or outside a graph
pattern.

On Tue, Nov 21, 2017 at 2:35 PM, Claude Warren  wrote:

> Currently the values are always placed in the top level of the query.
>
> Q: does it make a difference to exeuction?  (I suspect it does but I want
> to make sure before I proceed to add a method to place it inside the graph
> pattern.
>
> Claude
>
> On Tue, Nov 21, 2017 at 1:20 PM, Rob Vesse  wrote:
>
>> The output you get is syntactically valid - VALUES is allowed at the top
>> level of the query as well as within graph patterns
>>
>>  It is not clear to me if the latter this Is actually possible with the
>> current query builder, Claude can probably give you a more detailed answer
>>
>> Rob
>>
>>
>> On 21/11/2017, 12:05, "Chris Dollin" 
>> wrote:
>>
>> Dear All
>>
>> I'm missing something with use of the query builder to create VALUES
>> clauses.
>> The code
>>
>> @Test public void buildValues() {
>> SelectBuilder sb = new SelectBuilder();
>> sb.addValueVar("item",  "spoo", "flarn");
>> System.err.println(sb.buildString());
>> }
>>
>> generates
>>
>>   SELECT  *
>>   WHERE
>> {  }
>>   VALUES ?item { "spoo" "flarn" }
>>
>> which I believe to be syntactically incorrect but in any case I want
>> the
>> generated VALUES clause to be inside the WHERE {} ie
>>
>>   SELECT * WHERE {VALUES ?item {"spoo" "flarn"}}}
>>
>> What should I be doing and how should I have known that?
>>
>> Chris
>>
>> PS please to excuse the misuse of @Test here ... exploratory use only.
>>
>> --
>> "What I don't understand is this ..."   Trevor Chaplin, /The
>> Beiderbeck
>> Affair/
>>
>> Epimorphics Ltd, http://www.epimorphics.com
>> Registered address: Court Lodge, 105 High Street, Portishead, Bristol
>> BS20
>> 6PT
>> Epimorphics Ltd. is a limited company registered in England (number
>> 7016688)
>>
>>
>>
>>
>>
>>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: problem with VALUES querybuilder

2017-11-21 Thread Claude Warren
Currently the values are always placed at the top level of the query.

Q: does it make a difference to execution?  (I suspect it does, but I want
to make sure before I proceed to add a method to place it inside the graph
pattern.)

Claude

On Tue, Nov 21, 2017 at 1:20 PM, Rob Vesse  wrote:

> The output you get is syntactically valid - VALUES is allowed at the top
> level of the query as well as within graph patterns
>
>  It is not clear to me if the latter this Is actually possible with the
> current query builder, Claude can probably give you a more detailed answer
>
> Rob
>
>
> On 21/11/2017, 12:05, "Chris Dollin"  wrote:
>
> Dear All
>
> I'm missing something with use of the query builder to create VALUES
> clauses.
> The code
>
> @Test public void buildValues() {
> SelectBuilder sb = new SelectBuilder();
> sb.addValueVar("item",  "spoo", "flarn");
> System.err.println(sb.buildString());
> }
>
> generates
>
>   SELECT  *
>   WHERE
> {  }
>   VALUES ?item { "spoo" "flarn" }
>
> which I believe to be syntactically incorrect but in any case I want
> the
> generated VALUES clause to be inside the WHERE {} ie
>
>   SELECT * WHERE {VALUES ?item {"spoo" "flarn"}}}
>
> What should I be doing and how should I have known that?
>
> Chris
>
> PS please to excuse the misuse of @Test here ... exploratory use only.
>
> --
> "What I don't understand is this ..."   Trevor Chaplin, /The Beiderbeck
> Affair/
>
> Epimorphics Ltd, http://www.epimorphics.com
> Registered address: Court Lodge, 105 High Street, Portishead, Bristol
> BS20
> 6PT
> Epimorphics Ltd. is a limited company registered in England (number
> 7016688)
>
>
>
>
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: sixteenth anniversary

2017-11-20 Thread Claude Warren
Or perhaps a song https://www.youtube.com/watch?v=yoOuTSBAWWA


On Mon, Nov 20, 2017 at 4:22 PM, Claude Warren  wrote:

> We need a cake :)
>
> On Mon, Nov 20, 2017 at 4:09 PM, Andy Seaborne  wrote:
>
>> Today, November 20th, 2017, is the sixteenth anniversary of the
>> registration of Jena as a SourceForge project.
>>
>> https://sourceforge.net/projects/jena/
>> 2001-11-20
>>
>> The entire code history up until the migration to git is available in
>> Apache SVN - from CVS at SF, SVN at SF and SVN at Apache.
>>
>> Andy
>>
>
>
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: sixteenth anniversary

2017-11-20 Thread Claude Warren
We need a cake :)

On Mon, Nov 20, 2017 at 4:09 PM, Andy Seaborne  wrote:

> Today, November 20th, 2017, is the sixteenth anniversary of the
> registration of Jena as a SourceForge project.
>
> https://sourceforge.net/projects/jena/
> 2001-11-20
>
> The entire code history up until the migration to git is available in
> Apache SVN - from CVS at SF, SVN at SF and SVN at Apache.
>
> Andy
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: problem with query builder

2017-11-18 Thread Claude Warren
Fix coming shortly, but the workaround is...

change:

Expr x = new NodeValueInteger(1);

to:

NodeValue nv = new NodeValueInteger(1);
nv.asNode();   // forces the internal Node to be computed and cached
Expr x = nv;

Claude

On Sat, Nov 18, 2017 at 3:48 PM, Claude Warren  wrote:

> Well done Andy.  Right on the money, exactly what I discovered.
>
> On Sat, Nov 18, 2017 at 3:39 PM, Andy Seaborne  wrote:
>
>> c.f. NodeValue.getNode and NodeValue.asNode
>>
>> Presumably something calls getNode when it wanted asNode.
>>
>>
>> On 18/11/17 15:36, Claude Warren wrote:
>>
>>> This is a bug.  The problem is that new NodeValueInteger(1);
>>> does not set the Node value that the rewriter is trying to rewrite and
>>> thus
>>> the null pointer exception.
>>>
>>> I will get a fix out shortly.
>>>
>>> Claude
>>>
>>> On Fri, Nov 17, 2017 at 4:14 PM, Chris Dollin <
>>> chris.dol...@epimorphics.com>
>>> wrote:
>>>
>>> Hi,
>>>>
>>>> I was experimenting with the query builder and hit a problem
>>>> when I was attempting to construct an Expr. This example:
>>>>
>>>> package com.epimorphics.scratch;
>>>>
>>>> import org.apache.jena.arq.querybuilder.ConstructBuilder;
>>>> import org.apache.jena.sparql.expr.E_GreaterThan;
>>>> import org.apache.jena.sparql.expr.Expr;
>>>> import org.apache.jena.sparql.expr.ExprVar;
>>>> import org.apache.jena.sparql.expr.nodevalue.NodeValueInteger;
>>>> import org.junit.Test;
>>>>
>>>> public class Example {
>>>>
>>>>  @Test public void tryBuilding() {
>>>>  ConstructBuilder cb = new ConstructBuilder();
>>>>
>>>>  Expr x = new NodeValueInteger(1);
>>>>  Expr y = new ExprVar("y");
>>>>  Expr e = new E_GreaterThan(x, y);
>>>>  cb.addFilter(e);
>>>>  System.err.println(cb.buildString());
>>>>  }
>>>> }
>>>>
>>>> when run fails with a null pointer exception:
>>>>
>>>> java.lang.NullPointerException
>>>>  at org.apache.jena.arq.querybuilder.rewriters.
>>>> AbstractRewriter.changeNode(AbstractRewriter.java:126)
>>>>  at org.apache.jena.arq.querybuilder.rewriters.NodeValueRewriter
>>>> .visit(
>>>> NodeValueRewriter.java:64)
>>>>  at org.apache.jena.sparql.expr.nodevalue.NodeValueInteger.
>>>> visit(NodeValueInteger.java:78)
>>>>  at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
>>>> ExprRewriter.java:127)
>>>>  at org.apache.jena.sparql.expr.NodeValue.visit(NodeValue.java:1
>>>> 205)
>>>>  at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
>>>> ExprRewriter.java:65)
>>>>  at org.apache.jena.sparql.expr.ExprFunction2.visit(
>>>> ExprFunction2.java:109)
>>>>  at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.v
>>>> isit(
>>>> ElementRewriter.java:70)
>>>>  at org.apache.jena.sparql.syntax.ElementFilter.visit(
>>>> ElementFilter.java:35)
>>>>  at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.v
>>>> isit(
>>>> ElementRewriter.java:141)
>>>>  at org.apache.jena.sparql.syntax.ElementGroup.visit(
>>>> ElementGroup.java:120)
>>>>  at org.apache.jena.arq.querybuilder.handlers.WhereHandler.addAll(
>>>> WhereHandler.java:81)
>>>>  at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
>>>> HandlerBlock.java:218)
>>>>  at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
>>>> HandlerBlock.java:245)
>>>>  at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.build(
>>>> AbstractQueryBuilder.java:555)
>>>>  at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.buildS
>>>> tring(
>>>> AbstractQueryBuilder.java:522)
>>>>  at com.epimorphics.scratch.Example.tryBuilding(Example.java:20)
>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>  at sun.reflect.NativeMethodAccessorImpl.invoke(
>>>> NativeMethodAccessorImpl.java:62)
>>>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>>>> DelegatingMethodAccessorImpl.java:43)
>>>>  at java.lang.reflect.Method.invoke(Method.java:498)
>>>

Re: problem with query builder

2017-11-18 Thread Claude Warren
Well done Andy.  Right on the money, exactly what I discovered.

On Sat, Nov 18, 2017 at 3:39 PM, Andy Seaborne  wrote:

> c.f. NodeValue.getNode and NodeValue.asNode
>
> Presumably something calls getNode when it wanted asNode.
>
>
> On 18/11/17 15:36, Claude Warren wrote:
>
>> This is a bug.  The problem is that new NodeValueInteger(1);
>> does not set the Node value that the rewriter is trying to rewrite and
>> thus
>> the null pointer exception.
>>
>> I will get a fix out shortly.
>>
>> Claude
>>
>> On Fri, Nov 17, 2017 at 4:14 PM, Chris Dollin <
>> chris.dol...@epimorphics.com>
>> wrote:
>>
>> Hi,
>>>
>>> I was experimenting with the query builder and hit a problem
>>> when I was attempting to construct an Expr. This example:
>>>
>>> package com.epimorphics.scratch;
>>>
>>> import org.apache.jena.arq.querybuilder.ConstructBuilder;
>>> import org.apache.jena.sparql.expr.E_GreaterThan;
>>> import org.apache.jena.sparql.expr.Expr;
>>> import org.apache.jena.sparql.expr.ExprVar;
>>> import org.apache.jena.sparql.expr.nodevalue.NodeValueInteger;
>>> import org.junit.Test;
>>>
>>> public class Example {
>>>
>>>  @Test public void tryBuilding() {
>>>  ConstructBuilder cb = new ConstructBuilder();
>>>
>>>  Expr x = new NodeValueInteger(1);
>>>  Expr y = new ExprVar("y");
>>>  Expr e = new E_GreaterThan(x, y);
>>>  cb.addFilter(e);
>>>  System.err.println(cb.buildString());
>>>  }
>>> }
>>>
>>> when run fails with a null pointer exception:
>>>
>>> java.lang.NullPointerException
>>>  at org.apache.jena.arq.querybuilder.rewriters.
>>> AbstractRewriter.changeNode(AbstractRewriter.java:126)
>>>  at org.apache.jena.arq.querybuilder.rewriters.NodeValueRewriter
>>> .visit(
>>> NodeValueRewriter.java:64)
>>>  at org.apache.jena.sparql.expr.nodevalue.NodeValueInteger.
>>> visit(NodeValueInteger.java:78)
>>>  at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
>>> ExprRewriter.java:127)
>>>  at org.apache.jena.sparql.expr.NodeValue.visit(NodeValue.java:1205)
>>>  at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
>>> ExprRewriter.java:65)
>>>  at org.apache.jena.sparql.expr.ExprFunction2.visit(
>>> ExprFunction2.java:109)
>>>  at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.
>>> visit(
>>> ElementRewriter.java:70)
>>>  at org.apache.jena.sparql.syntax.ElementFilter.visit(
>>> ElementFilter.java:35)
>>>  at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.
>>> visit(
>>> ElementRewriter.java:141)
>>>  at org.apache.jena.sparql.syntax.ElementGroup.visit(
>>> ElementGroup.java:120)
>>>  at org.apache.jena.arq.querybuilder.handlers.WhereHandler.addAll(
>>> WhereHandler.java:81)
>>>  at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
>>> HandlerBlock.java:218)
>>>  at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
>>> HandlerBlock.java:245)
>>>  at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.build(
>>> AbstractQueryBuilder.java:555)
>>>  at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.buildS
>>> tring(
>>> AbstractQueryBuilder.java:522)
>>>  at com.epimorphics.scratch.Example.tryBuilding(Example.java:20)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke(
>>> NativeMethodAccessorImpl.java:62)
>>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(
>>> DelegatingMethodAccessorImpl.java:43)
>>>  at java.lang.reflect.Method.invoke(Method.java:498)
>>>  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
>>> FrameworkMethod.java:47)
>>>  at org.junit.internal.runners.model.ReflectiveCallable.run(
>>> ReflectiveCallable.java:12)
>>>  at org.junit.runners.model.FrameworkMethod.invokeExplosively(
>>> FrameworkMethod.java:44)
>>>  at org.junit.internal.runners.statements.InvokeMethod.
>>> evaluate(InvokeMethod.java:17)
>>>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>>>  at org.junit.runners.BlockJUnit4ClassRunner.runChild(
&g

Re: problem with query builder

2017-11-18 Thread Claude Warren
This is a bug.  The problem is that new NodeValueInteger(1) does not set
the internal Node value that the rewriter is trying to rewrite, and thus
the null pointer exception.

I will get a fix out shortly.

Claude

On Fri, Nov 17, 2017 at 4:14 PM, Chris Dollin 
wrote:

> Hi,
>
> I was experimenting with the query builder and hit a problem
> when I was attempting to construct an Expr. This example:
>
> package com.epimorphics.scratch;
>
> import org.apache.jena.arq.querybuilder.ConstructBuilder;
> import org.apache.jena.sparql.expr.E_GreaterThan;
> import org.apache.jena.sparql.expr.Expr;
> import org.apache.jena.sparql.expr.ExprVar;
> import org.apache.jena.sparql.expr.nodevalue.NodeValueInteger;
> import org.junit.Test;
>
> public class Example {
>
> @Test public void tryBuilding() {
> ConstructBuilder cb = new ConstructBuilder();
>
> Expr x = new NodeValueInteger(1);
> Expr y = new ExprVar("y");
> Expr e = new E_GreaterThan(x, y);
> cb.addFilter(e);
> System.err.println(cb.buildString());
> }
> }
>
> when run fails with a null pointer exception:
>
> java.lang.NullPointerException
> at org.apache.jena.arq.querybuilder.rewriters.
> AbstractRewriter.changeNode(AbstractRewriter.java:126)
> at org.apache.jena.arq.querybuilder.rewriters.NodeValueRewriter.visit(
> NodeValueRewriter.java:64)
> at org.apache.jena.sparql.expr.nodevalue.NodeValueInteger.
> visit(NodeValueInteger.java:78)
> at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
> ExprRewriter.java:127)
> at org.apache.jena.sparql.expr.NodeValue.visit(NodeValue.java:1205)
> at org.apache.jena.arq.querybuilder.rewriters.ExprRewriter.visit(
> ExprRewriter.java:65)
> at org.apache.jena.sparql.expr.ExprFunction2.visit(
> ExprFunction2.java:109)
> at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.visit(
> ElementRewriter.java:70)
> at org.apache.jena.sparql.syntax.ElementFilter.visit(
> ElementFilter.java:35)
> at org.apache.jena.arq.querybuilder.rewriters.ElementRewriter.visit(
> ElementRewriter.java:141)
> at org.apache.jena.sparql.syntax.ElementGroup.visit(
> ElementGroup.java:120)
> at org.apache.jena.arq.querybuilder.handlers.WhereHandler.addAll(
> WhereHandler.java:81)
> at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
> HandlerBlock.java:218)
> at org.apache.jena.arq.querybuilder.handlers.HandlerBlock.addAll(
> HandlerBlock.java:245)
> at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.build(
> AbstractQueryBuilder.java:555)
> at org.apache.jena.arq.querybuilder.AbstractQueryBuilder.buildString(
> AbstractQueryBuilder.java:522)
> at com.epimorphics.scratch.Example.tryBuilding(Example.java:20)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(
> FrameworkMethod.java:47)
> at org.junit.internal.runners.model.ReflectiveCallable.run(
> ReflectiveCallable.java:12)
> at org.junit.runners.model.FrameworkMethod.invokeExplosively(
> FrameworkMethod.java:44)
> at org.junit.internal.runners.statements.InvokeMethod.
> evaluate(InvokeMethod.java:17)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(
> BlockJUnit4ClassRunner.java:70)
> at org.junit.runners.BlockJUnit4ClassRunner.runChild(
> BlockJUnit4ClassRunner.java:50)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:160)
> at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(
> JUnit4IdeaTestRunner.java:68)
> at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.
> startRunnerWithArgs(IdeaTestRunner.java:47)
> at com.intellij.rt.execution.junit.JUnitStarter.
> prepareStreamsAndStart(
> JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(
> JUnitStarter.java:70)
>
> Replacing the addFilter line with
>
> cb.addFilter("1 > ?y");
>
> does not fail, terminating with the expected print-out
> of a construct query
>
>CONSTRUCT
> {
> }
>   WHERE
> { FILTER ( 1 > ?y )}
>
> I tried poking around with the debugger but couldn't see anyhting
> obvious. However, /sometimes/ stepping through the code
> results in it working and sometimes no

Re: How to derive Change Statements

2017-10-30 Thread Claude Warren
Just to clear up any misunderstanding, the permissions layer is not tied
to Fuseki.  I was thinking of Fuseki in my answer because you wanted to know
who was making changes, and Fuseki would handle that part.  If you have
another mechanism to track who is making the change, the permissions layer
will still work.

Claude

On Mon, Oct 30, 2017 at 9:52 AM, anuj kumar  wrote:

> Hey Claude,
>  I am not using Fuseki and thus the solution you propose will not be a
> feasible one for me.
>
> Andy,
>  Thanks for the information on GraphListener, DatasetChanges as well as
> rdf-patch. I think using these tools I will e able to handle my use cases.
> Let me give them a try and see if I stumble upon some rabbit hole.
>
> Thanks,
> Anuj Kumar
>
> On Fri, Oct 27, 2017 at 2:39 PM, Claude Warren  wrote:
>
> > Since you need to detect who changed what the only way I can see to do
> this
> > is turn on authentication on Fuseki and track changes made through it.
> >
> > You could bastardise the permissions layer[1] to do what you want.  The
> > permissions layer will let you filter down to the actions on the triples,
> > rather than implementing a SecurityEvaluator to perform the restriction
> you
> > could implement it record all changes (including who made them) in any
> > storage and format you wish.
> >
> > 1. https://jena.apache.org/documentation/permissions/index.html
> >
> >
> > On Fri, Oct 27, 2017 at 11:42 AM, anuj kumar 
> > wrote:
> >
> > > Hi Jena Users,
> > >  I have a query regarding the most effective way to capture changes in
> > the
> > > underlying Triple Store.
> > > I have a requirement where:
> > > 1. Every time a property of a Node (represented as a Triple Statement)
> > > changes, I also need to generate certain change statements to capture
> > what
> > > has changed, who changed it, when it was changed etc.
> > > 2. If I delete a Node (represented as a Set of Triples in the RDF
> > Store), I
> > > need to capture the action DELETE on this node, who deleted the node,
> > when
> > > it was deleted etc.
> > >
> > > Basically, I need to have a audit trail developed so that I  can create
> > the
> > > graph as it was at a given moment in time.
> > >
> > > The question is:
> > > 1. What is the best way to implement such functionality? Does Jena
> > support
> > > such a thing either natively or through some standard mechanism?
> > >
> > > Thanks,
> > > --
> > > *Anuj Kumar*
> > >
> >
> >
> >
> > --
> > I like: Like Like - The likeliest place on the web
> > <http://like-like.xenei.com>
> > LinkedIn: http://www.linkedin.com/in/claudewarren
> >
>
>
>
> --
> *Anuj Kumar*
>



-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


Re: How to derive Change Statements

2017-10-27 Thread Claude Warren
Since you need to detect who changed what, the only way I can see to do this
is to turn on authentication on Fuseki and track changes made through it.

You could bastardise the permissions layer[1] to do what you want.  The
permissions layer will let you filter down to the actions on the triples;
rather than implementing a SecurityEvaluator to perform a restriction, you
could implement one that records all changes (including who made them) in
any storage and format you wish.

1. https://jena.apache.org/documentation/permissions/index.html
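
For the "what changed and when" half, ARQ also has a change-monitoring hook
that can feed an audit trail.  A minimal sketch (class and method names are
from my reading of the ARQ sources; wiring in the authenticated user is left
to the surrounding code):

import org.apache.jena.graph.Node;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.sparql.core.DatasetChanges;
import org.apache.jena.sparql.core.DatasetGraph;
import org.apache.jena.sparql.core.DatasetGraphMonitor;
import org.apache.jena.sparql.core.QuadAction;

public class AuditExample {
    public static DatasetGraph audited() {
        DatasetChanges audit = new DatasetChanges() {
            @Override public void start() {}
            @Override public void change(QuadAction action, Node g, Node s, Node p, Node o) {
                // Replace with writes to whatever audit store and format you choose.
                System.out.printf("%d %s %s %s %s %s%n",
                        System.currentTimeMillis(), action, g, s, p, o);
            }
            @Override public void finish() {}
            @Override public void reset() {}
        };
        DatasetGraph base = DatasetFactory.createTxnMem().asDatasetGraph();
        return new DatasetGraphMonitor(base, audit);   // every add/delete is reported
    }
}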


On Fri, Oct 27, 2017 at 11:42 AM, anuj kumar 
wrote:

> Hi Jena Users,
>  I have a query regarding the most effective way to capture changes in the
> underlying Triple Store.
> I have a requirement where:
> 1. Every time a property of a Node (represented as a Triple Statement)
> changes, I also need to generate certain change statements to capture what
> has changed, who changed it, when it was changed etc.
> 2. If I delete a Node (represented as a Set of Triples in the RDF Store), I
> need to capture the action DELETE on this node, who deleted the node, when
> it was deleted etc.
>
> Basically, I need to have a audit trail developed so that I  can create the
> graph as it was at a given moment in time.
>
> The question is:
> 1. What is the best way to implement such functionality? Does Jena support
> such a thing either natively or through some standard mechanism?
>
> Thanks,
> --
> *Anuj Kumar*
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Binding graph names for WITH and USING

2017-10-25 Thread Claude Warren
Another template solution is the query builder, where you can bind the ?g
just before querying.  Though I am not certain that you can specify a var
for the graph name when building the query; if you can, then use the
setVar() method to replace it just before execution.
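
As a sketch of the string-template route, ParameterizedSparqlString does
(as I understand it) textual substitution before parsing, so the grammar
restriction on WITH never sees a variable.  Untested against this exact
case, and the graph IRI here is made up:

import org.apache.jena.query.ParameterizedSparqlString;
import org.apache.jena.update.UpdateRequest;

ParameterizedSparqlString pss = new ParameterizedSparqlString(
        "PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"
        + "WITH ?g\n"
        + "DELETE { ?person foaf:givenName 'Bill' }\n"
        + "INSERT { ?person foaf:givenName 'William' }\n"
        + "WHERE  { ?person foaf:givenName 'Bill' }");
pss.setIri("g", "http://example.org/g");   // ?g becomes a concrete IRI

UpdateRequest update = pss.asUpdate();     // parsing happens after substitution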

Claude

On Wed, Oct 25, 2017 at 10:35 PM, Andy Seaborne  wrote:

> USING is the like  SELECT-FROM -- the WG decided a different name was
> better. It sets up the dataset to query in the WHERE clause.  Again, there
> is no variable binding to use.
>
> It looks to me like you really want $resourceMetadataGraph to come from
> outside - a parametrized update.  If so, the unsubtle way is to do string
> replacement.  I don't know if ParameterizedQueryString will work.
>
> Don't forget
>
> DELETE WHERE  QuadPattern
>
> saves writing template and pattern
>
> Andy
>
>
> On 25/10/17 21:04, Charles Greer wrote:
>
>>
>> Hi Andy,
>>
>>
>> I was trying to just recreate the scenario -- the original query did have
>> a WHERE.
>>
>> I found it:
>>
>>
>> prefix raf: 
>> with $resourceMetadataGraph
>> delete {?urn raf:latest ?latest.}
>> where {?urn raf:latest ?latest.}
>>
>>
>> It was my attempt to present a workaround that ended up with a USING -
>> from SPARQL 1.1
>>
>>
>> """
>>
>> To illustrate the use of the WITH clause, an operation of the general
>> form:
>>
>> WITH  DELETE { a b c } INSERT { x y z } WHERE { ... }
>>
>> is considered equivalent to:
>>
>> DELETE { GRAPH  { a b c } } INSERT { GRAPH  { x y z } } USING
>>  WHERE { ... }
>>
>> """
>>
>>
>> We have taken on binding after USING and WITH as an enhancement request,
>> but given your response I'm going to push back
>>
>> until SPARQL 2.1
>>
>>
>> Thank you, Charles
>>
>>
>> 
>> From: Andy Seaborne 
>> Sent: Wednesday, October 25, 2017 11:10:12 AM
>> To: users@jena.apache.org
>> Subject: Re: Binding graph names for WITH and USING
>>
>> What are they trying to achieve?
>>
>> The Update has a second error the parsers didn't get to.
>>
>> No WHERE clause - it's mandatory
>>
>> Did they mean:
>>
>> INSERT {
>>  GRAPH ?g {   "1" }
>> }
>> WHERE
>>{ GRAPH ?g { } }
>>
>> GRAPH ?g { } returns all graph names.
>>
>> Where does USING come into this?
>>
>>   > Sometimes "the spec prohibits it" is not what people want to hear.
>>
>> That's what the commercial support has to answer!
>>
>>   Andy
>>
>>
>> On 25/10/17 19:02, Charles Greer wrote:
>>
>>> The error from Jena when I try to reproduce is fairly clear:
>>>
>>>
>>> @Test
>>> public void testGraphBinding() {
>>>   MarkLogicDatasetGraph dsg = getMarkLogicDatasetGraph();
>>>   String updateQuery = "WITH ?g INSERT {  <
>>> http://example.org/p> \"1\" }";
>>>   BindingMap updateBindings = new BindingHashMap();
>>>   updateBindings.add(Var.alloc("g"),
>>>   NodeFactory.createURI("http://example.org/g";));
>>>   UpdateRequest update = new UpdateRequest();
>>>   update.add(updateQuery);
>>>   UpdateAction.execute(update, dsg, updateBindings);
>>> }
>>>
>>>
>>>
>>> org.apache.jena.query.QueryParseException: Encountered "  "?g ""
>>> at line 1, column 6.
>>> Was expecting one of:
>>>...
>>>...
>>>...
>>>
>>>
>>>
>>> So I guess to reframe my question -- given that the expectation of this
>>> customer is that ?g should be bindable here,
>>>
>>> can I give them a rationale?  Sometimes "the spec prohibits it" is not
>>> what people want to hear.
>>>
>>>
>>>
>>>
>>> Charles Greer
>>> Lead Engineer
>>> MarkLogic Corporation
>>> cgr...@marklogic.com
>>> @grechaw
>>> www.marklogic.com
>>>
>> Best Database for Integrating Data From Silos | MarkLogic<
>> http://www.marklogic.com/>
>> www.marklogic.com
>> Learn why MarkLogic Enterprise NoSQL is the world's best database for
>> integrating data from silos.
>>
>>
>>
>>> 
>>> From: james anderson 
>>> Sent: Wednesday, October 25, 2017 9:59:14 AM
>>> To: users@jena.apache.org
>>> Subject: Re: Binding graph names for WITH and USING
>>>
>>> good evening;
>>>
>>>
>>> On 2017-10-25, at 18:48, Charles Greer 
 wrote:

 Hi jena folks, I was surprised recently with a customer who was
 surprised that my jena connector did not properly

 bind graph names as variables after WITH.


 WITH ?g

 INSERT ...

 DELETE ...


 When I looked back at the SPARQL specs, it looks indeed true that
 variables are inadmissable after WITH or USING.

 I am curious about how to write a workaround, short of putting a
 literal in for ?g

>>>
>>> do you have a concrete example which you are in a position to share?
>>>
>>> where did the customer express a binding for the ?g of which they
>>> thought the ‘with’ clause was in its scope?
>>>
>>> best regards, from berlin,
>>>
>>>
>>>
>>


-- 
I like: Like Like - The likeliest

Re: Multiunion doubt

2017-09-08 Thread Claude Warren
Try creating a MultiUnion graph and then create a model on that.

The last time I looked, the MultiUnion graph did not copy the data but
rather made calls across the member graph implementations.
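
A minimal sketch (the dataset variable and graph names are hypothetical):

import org.apache.jena.graph.Graph;
import org.apache.jena.graph.compose.MultiUnion;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

Graph g1 = dataset.getNamedModel("urn:example:g1").getGraph();
Graph g2 = dataset.getNamedModel("urn:example:g2").getGraph();

MultiUnion union = new MultiUnion(new Graph[] { g1, g2 });
Model combined = ModelFactory.createModelForGraph(union);   // no data is copied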

Claude

On Fri, Sep 8, 2017 at 11:17 AM, George News  wrote:

> Ups I have just noticed that MultiUnion is working with Graphs and not
> models :(
>
> Then, I'm using dataset.getNamedMode(name).getGraph(). But all my
> functions expect a Model as an input. How should I proceed?
>
> 1) ModelFactory.createUnion() in a kind of loop, adding one model, and
> then another, etc.
>
> 2) ModelFactory.createModelForGraph(Graph)
>
> I think I like a bit more the second, as this way if there is only one
> graph there shouldn't be any issue.
>
> Any tips?
>
> Thanks.
>
>
> On 2017-09-08 11:11, George News wrote:
> > Hi,
> >
> > Is multiunion only linking the graphs or copying them in a new graph?
> >
> > I'm thinking on splitting a huge graph on many simple ones and them
> > making the union depending on the part of the main graph that has to be
> > requested, therefore limiting the scope of the sparql and speeding
> > things up. But if multiunion copies, then I don't think I will speed
> things.
> >
> > Thanks
> >
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Jena Bean

2017-08-11 Thread Claude Warren
I have not used JenaBean, as I am the author of PA4RDF, but I believe that
both do similar things, that is, map POJOs to RDF data stores.  So yes,
they bridge between Java code and RDF.

Claude

On Fri, Aug 11, 2017 at 8:44 PM, javed khan  wrote:

> What is JenaBean advantages? Is it simplify our code and bring the bridge
> between our Java code and rdf Api?
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: RDFConnection create dataset?

2017-07-11 Thread Claude Warren
654)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
[2017-07-11 16:31:09] Admin  INFO  [57] 500 Server Error (2 ms)

{noformat}

Any idea what I am doing wrong?

Claude

On Tue, Jul 11, 2017 at 4:20 PM,  wrote:

> Claude, is this what you are looking for?
>
> https://jena.apache.org/documentation/fuseki2/fuseki-server-
> protocol.html#adding-a-dataset-and-its-services
>
> Basically you send a POST with some simple parameters.
>
> ajs6f
>
> Claude Warren wrote on 7/11/17 11:16 AM:
>
>> Is there an easy way to
>>
>>
>>1. determine what data sets exist on a connection
>>2. create a new dataset
>>
>> Failing that, is there an example of a REST call to create a new dataset
>> on
>> a Fuseki server?
>>
>>
>> Claude
>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


RDFConnection create dataset?

2017-07-11 Thread Claude Warren
Is there an easy way to


   1. determine what data sets exist on a connection
   2. create a new dataset

Failing that, is there an example of a REST call to create a new dataset on
a Fuseki server?
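
For reference, a sketch of such a call against the Fuseki 2 admin protocol
(the server URL and dataset name are made up; a GET on the same /$/datasets
endpoint should list the existing datasets):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CreateDataset {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://localhost:3030/$/datasets");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        // dbType may be "mem" or "tdb" per the admin protocol documentation.
        byte[] body = "dbName=mydataset&dbType=tdb".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        System.out.println("HTTP " + conn.getResponseCode());   // expect 200
    }
}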


Claude

-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: SPARQL update question

2017-06-03 Thread Claude Warren
Ahhh, I missed the WITH/WHERE difference. DOH!

On Sat, Jun 3, 2017 at 3:59 PM, Andy Seaborne  wrote:

> (You could try in at the command line and see!)
>
> Yes, they are different.
>
> On 03/06/17 15:51, Claude Warren wrote:
>
>> Is there a difference between
>>
>> DELETE {
>>GRAPH <http://example/addresses> {
>>  ?person foaf:givenname "Bill" .
>>}
>> }
>> INSERT {
>>GRAPH <http://example/addresses> {
>>  ?person foaf:givenname "William" .
>>}
>> }
>>
>
> WITH applies to the pattern as well.
>
> sparql11-update -- Section 3.1.3
>
> WHERE
>   { GRAPH <http://example/addresses>
>       { ?person  foaf:givenname  "Bill" }
>   }
>
>>
>>
>> and
>>
>> WITH <http://example/addresses>
>> DELETE { ?person foaf:givenName 'Bill' }
>> INSERT { ?person foaf:givenName 'William' }
>> WHERE
>>{ ?person foaf:givenName 'Bill'
>>}
>>
>> I believe they are the logically the same but I want to make sure I am not
>> missing something.
>>
>
> What lead you to believe that?
>
> Andy
>
>
>> Claude
>>
>>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren


SPARQL update question

2017-06-03 Thread Claude Warren
Is there a difference between

DELETE {
  GRAPH <http://example/addresses> {
?person foaf:givenname "Bill" .
  }
}
INSERT {
  GRAPH <http://example/addresses> {
?person foaf:givenname "William" .
  }
}
WHERE
  { ?person  foaf:givenname  "Bill" }


and

WITH <http://example/addresses>
DELETE { ?person foaf:givenName 'Bill' }
INSERT { ?person foaf:givenName 'William' }
WHERE
  { ?person foaf:givenName 'Bill'
  }

I believe they are the logically the same but I want to make sure I am not
missing something.

Claude
-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren


Re: Question - Urgent !

2017-05-25 Thread Claude Warren
To be fair saying "I want a widget" is not a very clear requirement.

However, if you create a JavaScript-backed HTML document with a search bar,
then when the user hits enter or clicks the search button you can take the
contents of the search bar, pass them to the Fuseki query interface, and
get results back.

This assumes that the user actually enters valid SPARQL.

The query interface is by default at /query, as documented at
https://jena.apache.org/documentation/serving_data/index.html#server-uri-scheme

Claude

On Fri, May 26, 2017 at 6:46 AM, S.ABINAYA S.ABINAYA  wrote:

> I was pretty clear with what I wanted
>
> On Fri, May 26, 2017 at 11:12 AM, Laura Morales  wrote:
>
> > > That is too broad ! Is there any specific guide ?
> >
> > Ask broad questions, get broad answers.
> >
>



-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren
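For the record, the same round trip from Java is only a few lines with ARQ;
a minimal sketch, assuming a default local Fuseki with a dataset at /ds (the
URL and the query string are placeholders for whatever the user typed):

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.query.ResultSetFormatter;

public class SearchBox {
    public static void main(String[] args) {
        // Stand-in for the text the user entered; must be valid SPARQL.
        String queryString = "SELECT * WHERE { ?s ?p ?o } LIMIT 10";
        try (QueryExecution qe = QueryExecutionFactory.sparqlService(
                "http://localhost:3030/ds/query", queryString)) {
            ResultSet results = qe.execSelect();
            // Print the result table to stdout.
            ResultSetFormatter.out(System.out, results);
        }
    }
}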


Re: Javadoc download for Jena

2017-05-13 Thread Claude Warren
The javadocs are released as part of the distribution.  Most IDEs that
support Maven usage have a mechanism to download the javadocs as well.  In
this case the javadocs will appear as if by magic.  Otherwise the javadocs
for the specific jars can be downloaded from
http://central.maven.org/maven2/org/apache/jena/



On Sat, May 13, 2017 at 10:30 AM, David Moss  wrote:

> I see there is great Javadoc for Jena via web browsing, but how can I
> download the Javadoc so I can include it in an IDE?
>
>
> DM
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren
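For a Maven project built from the command line, one way that should fetch
the javadoc jars for every dependency (assuming the stock
maven-dependency-plugin) is:

mvn dependency:resolve -Dclassifier=javadoc

After that, IDEs that read the local repository will usually pick the
javadocs up without any further configuration.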


Re: How to make complex SPARQL queries reusable?

2017-04-24 Thread Claude Warren
You could use federated queries to return the sub query -- this is probably
not efficient, but it might provide a starting point for further investigation.

If you are doing this in code you could use the QueryBuilder (
https://jena.apache.org/documentation/extras/querybuilder/) and pass the
sub query to the outer query.

Claude

On Mon, Apr 24, 2017 at 1:02 AM, Simon Schäfer  wrote:

> Hello,
>
> I have complex SPARQL queries, which I would like to divide into several
> parts (like function definitions), in order to reuse these parts among
> different SPARQL queries and in order to make a single SPARQL query easier
> to understand. What are my options to achieve this?
>
> I had a look at Jenas built in functionality to support user defined
> function definitions. The problem with them is that it seems that they can
> be used only for simple functionality like calculating the max of two
> integers. But I have quite complex functionality, which I don't want to
> rewrite in Java. Example:
>
> 
> 
> select * where {
>   ?s a ?tpe .
>   filter not exists {
> ?sub rdfs:subClassOf ?tpe .
> filter (?sub != ?tpe)
>   }
> }
> 
> 
>
> It would be great if that could be separated into:
>
> 
> 
> public class MyFunc extends FunctionBase1 {
> public NodeValue exec(NodeValue v) {
> return NodeValue.fromSparql("filter not exists {"
> + "   ?sub rdfs:subClassOf ?tpe ."
> + "  filter (?sub != ?tpe)"
> + "}") ;
> }
> }
> // and then later
> FunctionRegistry.get().put("http://example.org/function#myFunc",
> MyFunc.class) ;
> 
> 
>
> and then:
>
> 
> 
> prefix fun: <http://example.org/function#>
> select * where {
>   ?s a ?tpe .
>   filter(fun:myFunc(?tpe))
> }
> 
> 
>
> Basically I'm looking for a way to call a SPARQL query from within a
> SPARQL query. Is that possible?
>
>
>


-- 
I like: Like Like - The likeliest place on the web

LinkedIn: http://www.linkedin.com/in/claudewarren
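A minimal sketch of the QueryBuilder route mentioned above, assuming
jena-querybuilder's addSubQuery is available; the predicates and variable
names are only placeholders, and the NOT EXISTS filter from the original
question would still need to be added as a filter expression:

import org.apache.jena.arq.querybuilder.SelectBuilder;
import org.apache.jena.query.Query;
import org.apache.jena.vocabulary.RDF;
import org.apache.jena.vocabulary.RDFS;

public class SubQueryExample {
    public static void main(String[] args) {
        // The reusable fragment: everything that has an rdf:type.
        SelectBuilder typed = new SelectBuilder()
                .addVar("?s").addVar("?tpe")
                .addWhere("?s", RDF.type, "?tpe");

        // The outer query embeds the fragment as a sub-select.
        SelectBuilder outer = new SelectBuilder()
                .addVar("?s")
                .addSubQuery(typed)
                .addWhere("?s", RDFS.label, "?label");

        Query q = outer.build();
        System.out.println(q);
    }
}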


Re: construct with jena jdbc driver

2017-04-19 Thread Claude Warren
Thanks for the help.  I ended up converting to RDFConnection for connection
handling.

Claude

On Tue, Apr 18, 2017 at 5:03 PM, Rob Vesse  wrote:

> You can set the compatibility level on the connection, which will try to
> sniff the results and set an appropriate column type; however, if the
> results are very mixed the sniffing can/will be inaccurate.
>
> http://jena.apache.org/documentation/jdbc/drivers.
> html#jdbc-compatibility-level
>
> You can also access the utility methods that do this sniffing on a
> specific column value to detect the equivalent JDBC type:
>
> http://jena.apache.org/documentation/javadoc/jdbc/org/apache/jena/jdbc/
> JdbcCompatibility.html#detectColumnType-java.lang.
> String-org.apache.jena.graph.Node-boolean-
>
> Hope this helps,
>
> Rob
>
> On 18/04/2017 16:53, "Claude Warren"  wrote:
>
> Quick question:
>
> I have a construct query that returns various types for the object.
>
> example:
>
> CONSTRUCT
>   {
>  ?p ?o .
>   }
> WHERE
>   {  ?p  ?o
>   }
>
> Is there a method in the JDBC driver that will allow me to determine
> what
> that type is?  Parsing string -vs- URI is rather difficult. :(
>
> Thx,
> Claude
>
> --
> I like: Like Like - The likeliest place on the web
> <http://like-like.xenei.com>
> LinkedIn: http://www.linkedin.com/in/claudewarren
>
>
>
>
>
>


-- 
I like: Like Like - The likeliest place on the web
<http://like-like.xenei.com>
LinkedIn: http://www.linkedin.com/in/claudewarren
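For completeness, the RDFConnection version keeps the RDF typing intact, so
there is no string sniffing at all; a minimal sketch, assuming a local
Fuseki dataset at /ds (the URL and query are placeholders):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFactory;

public class ConstructTypes {
    public static void main(String[] args) {
        String construct = "CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o } LIMIT 10";
        try (RDFConnection conn =
                RDFConnectionFactory.connect("http://localhost:3030/ds")) {
            Model m = conn.queryConstruct(construct);
            // Each object is a proper RDFNode: ask it what it is instead
            // of parsing strings.
            m.listStatements().forEachRemaining(stmt ->
                    System.out.println((stmt.getObject().isLiteral()
                            ? "literal:  " : "resource: ")
                            + stmt.getObject()));
        }
    }
}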


  1   2   3   >