Re: Attaching JSON-LD to web page for Google SEO

2018-08-16 Thread Damian Steer


> On 16 Aug 2018, at 14:07, Mikael Pesonen  wrote:
> 
> 
> I'm exporting JSON-LD from Fuseki into a web page, but Google's validation 
> (https://search.google.com/structured-data/testing-tool) doesn't accept it.

> Google's validation says
> 
> name is not a known valid target type for the name property

"@context" : { "name": "http://schema.org/name” } works, as does “@context”: 
“http://schema.org/“.

Google’s structured data support has always been idiosyncratic, unfortunately. 
It's much better than it used to be, though.

It's worth reporting to them, and perhaps asking on Stack Overflow? (IIRC they 
monitor Stack Overflow for issues.)

Regarding Fuseki, I think you'll have to put something in to transform the 
JSON-LD, unfortunately. I've found I have to control JSON-LD serialisation 
fairly carefully in practice, since tools are not always fully compliant 
JSON-LD processors.
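For what it's worth, a minimal sketch (not your setup; current org.apache.jena 
package names and a made-up resource) of the kind of re-serialisation step I 
mean; the generated @context can then be compacted to the forms above:

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;

public class JsonLdExport {
    public static void main(String[] args) {
        // Hypothetical data; in practice this model would come from Fuseki.
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("schema", "http://schema.org/");
        m.createResource("http://example.org/thing")
         .addProperty(m.createProperty("http://schema.org/name"), "Example thing");

        // Write JSON-LD; Jena generates an @context from the prefixes/terms used.
        RDFDataMgr.write(System.out, m, RDFFormat.JSONLD);
    }
}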

In principle framing [1] ought to do the trick, but last time I tried it wasn’t 
fully working (although it may well be time to revisit that).

Damian

[1] <https://json-ld.org/spec/latest/json-ld-framing/>

-- 
Damian Steer
Data Research Software Engineer
University of Bristol
https://www.bris.ac.uk/contact/person/getDetails?personKey=W1K8jX7rokwl8OKZUeQMmPO1FXAvt0







Re: RDFa ...

2018-01-16 Thread Damian Steer


> On 13 Jan 2018, at 17:42, Jean-Marc Vanel <jeanmarc.va...@gmail.com> wrote:
> 
> Hi
> 
> Good news!
> I started upgrading the project java-rdfa .
> Here is my fork:
> https://github.com/jmvanel/java-rdfa/commits?author=jmvanel

Ooh blimey, good luck with that.

I gave up for a few reasons. Mostly time, of course. I think the streaming 
approach was wrong - RDFa isn’t defined like that, and it made writing the 
parser awkward. Trying to support 1.0 and 1.1 simultaneously was fiddly (and 
probably not worth the effort). Also I didn’t really like the spec, especially 
1.1, and the test suite became awkward to use.

Good luck with it!

Damian

-- 
Damian Steer
Senior Technical Researcher
University of Bristol
https://www.bris.ac.uk/contact/person/getDetails?personKey=W1K8jX7rokwl8OKZUeQMmPO1FXAvt0





Re: JenaJung new version required

2016-01-17 Thread Damian Steer

> On 17 Jan 2016, at 12:05, Andy Seaborne <a...@apache.org> wrote:
> 
> On 16/01/16 15:42, Prasad Priyadarshana Fernando wrote:
>> Hi,
>> 
>> Does anyone know about the latest version of Jena JUNG?
>> 
>> Thanks
>> 
>> *Prasad Priyadarshana Fernando <http://www.linkedin.com/in/prasadfernando>*
>> Mobile: +1 330 283 5827
>> 
> 
> (This work isn't part of the Apache Jena project)

Yep, a little demo I did years ago.

> It shouldn't be too hard to port it to Jena3.  It will be mostly the package 
> imports renaming.

Just updated it. Yes, the code changes are simply renaming the imports.
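For anyone else doing the same port, the change really is just the package 
prefix; a trivial sketch (not JenaJung code):

// Jena 2:
//   import com.hp.hpl.jena.rdf.model.Model;
//   import com.hp.hpl.jena.rdf.model.ModelFactory;
// Jena 3:
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class Jena3Smoketest {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        System.out.println("Jena 3 model created, size = " + m.size());
    }
}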

Hope it works ok,

Damian

-- 
Damian Steer
Senior Technical Researcher
Research IT
+44 (0) 117 39 41724



Re: Convert .ttl turtle format file into .rdf file

2015-08-06 Thread Damian Steer

 On 6 Aug 2015, at 10:13, Marko Pance mpa...@chemaxon.com wrote:
 
 I'm sorry, I don't quite understand your response. Are you saying if I'd like 
 to stream I should try using a format other than rdf/xml? In order to do so, 
 could I use the following command: 
 
 
 
 bin/riot --out rdfthrift ~/Downloads/chembl_20.0_molecule.ttl > ~/Downloads/chembl_20.0_molecule.rdf 

Essentially yes.

My reading of '--stream' is that it is the same as '--out' but with the 
additional requirement that the format should support streaming.

 What would go in place of "rdfthrift"? 

Depends on what you want to do with the result. If you want RDF/XML then you 
can't stream (currently). If you want another format, well, you already have 
Turtle, of course.

N-Triples is a solid format generally.
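If you'd rather do the conversion in code than with riot, here's a rough sketch 
of a streaming Turtle-to-N-Triples copy (current org.apache.jena package names; 
the input path is the file from your command):

import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.system.StreamRDF;
import org.apache.jena.riot.system.StreamRDFWriter;

public class TurtleToNTriples {
    public static void main(String[] args) {
        // N-Triples can be written as a stream, so nothing is held in memory.
        StreamRDF out = StreamRDFWriter.getWriterStream(System.out, Lang.NTRIPLES);
        RDFDataMgr.parse(out, "chembl_20.0_molecule.ttl");
    }
}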

Damian

-- 
Damian Steer
Senior Technical Researcher
Research IT
+44 (0) 117 928 7057



Re: TDB size

2015-02-12 Thread Damian Steer
On 12/02/15 13:49, Trevor Donaldson wrote:
 On Thu, Feb 12, 2015 at 6:32 AM, Trevor Donaldson tmdona...@gmail.com
 wrote:

 Hi,

 I am in the middle of updating our store from RDB to TDB. I have noticed
 a significant size increase in the amount of storage needed. Currently RDB
 is able to hold all the data I need (4 third party services and 4 years of
 their data) and it equals ~ 12G. I started inserting data from 1 third
 party service, only 4 months of their data into TDB and the TDB database
 size has already reached 15G. Is this behavior expected?

Hi Trevor,

How are you measuring the space used? TDB files tend to be sparse, so
the disk use reported can be unreliable. Example from my system:

6.2M [...] 264M [...] GOSP.dat

The first number (6.2M) is essentially the disk space taken, the second
(264M!) is the 'length' of the file.

Damian

-- 
Damian Steer
Senior Technical Researcher
Research IT
+44 (0) 117 928 7057


Re: Jena / Stanbol success stories?

2014-11-26 Thread Damian Steer
On 26/11/14 16:21, Rob Vesse wrote:
 Stian
 
 Putting up such a page is somewhat frowned upon at the ASF

If you search for 'powered by' you'll find quite a number of pages like
this at apache:

https://www.google.co.uk/search?q=powered+by+site%3Aapache.org

They seem to be fairly simple lists of uses, without (as you say) any
implied endorsement or promotion.

Damian

-- 
Damian Steer
Senior Technical Researcher
Research IT
+44 (0) 117 928 7057


Re: Bug SPARQL parser ?

2014-10-08 Thread Damian Steer
On 08/10/14 15:06, Julien Plu wrote:
 Hi,
 
 I think I found a little bug in the SPARQL parser for Jena 2.12.0. When I
 run the query :
 
 SELECT count(distinct ?s) as ?count WHERE {?s ?p ?o}

You need parentheses around the ... as ...:

SELECT (count(distinct ?s) as ?count) WHERE {?s ?p ?o}
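A quick way to check the corrected form parses, if you want to test it in code 
(a sketch, current package names, not part of the original report):

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.query.Syntax;

public class ParseCheck {
    public static void main(String[] args) {
        // Parses cleanly under the SPARQL 1.1 grammar:
        Query q = QueryFactory.create(
            "SELECT (count(distinct ?s) as ?count) WHERE { ?s ?p ?o }",
            Syntax.syntaxSPARQL_11);
        System.out.println(q);
        // The unparenthesised original throws QueryParseException instead.
    }
}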

Damian


Re: SPARQL1.1 query over Virtuoso repository through Jena

2014-04-10 Thread Damian Steer
On 10/04/14 06:15, Anila Sahar Butt wrote:
 PREFIX csiro: <http://au.csiro.browser#>
 PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
 PREFIX owl: <http://www.w3.org/2002/07/owl#>
 PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
 
 SELECT ?property ?propLabel ?range ?rangeLabel
 WHERE {
   { { <http://xmlns.com/foaf/0.1/Agent> rdfs:subClassOf+ ?res . ?res owl:onProperty ?property . }
     UNION
     { { <http://xmlns.com/foaf/0.1/Agent> rdfs:subClassOf+ ?class . ?property rdfs:domain ?class . }
       UNION
       { ?property rdfs:domain <http://xmlns.com/foaf/0.1/Agent> } } }
   { { ?res ?resProp ?range . FILTER ((?resProp = owl:allValuesFrom) || (?resProp = owl:someValuesFrom)) }
     UNION
     { ?property rdfs:range ?range . } }
   OPTIONAL { ?property rdfs:label ?propLabel . }
   OPTIONAL { ?range rdfs:label ?rangeLabel . }
 }

I just tried this using the sparql.org validator (which uses jena) and
it's fine.

What error are you getting?

Damian


Re: Incorrect output: Request guidance

2013-12-26 Thread Damian Steer

On 26 Dec 2013, at 20:51, Rose Beck rosebeck...@gmail.com wrote:

 I created my data file containing the following data(try.nq):
 
 <http://dbpedia.org/data/Plasmodium_hegneri.xml> 
 <http://code.google.com/p/ldspider/ns#headerInfo>
 _:header16125770191335188966549 <a> .

Ah, here is the issue.

N-Quads _doesn't_ permit relative URIs / IRIs. [1] TDB is being kind / 
unhelpful and loading the data as requested, full of relative IRIs.
This is, strictly, broken RDF. The behaviour when you work with it is, as a 
consequence, undefined.

(If you run the data through validation this issue is apparent:

$ riot --validate try.nq 
ERROR [line: 1, col: 133] Relative IRI: a
...)

 Then I fired the following SPARQL command:
 root@server:/home/apache-jena-2.10.0/bin# ./tdbquery --time

...

 I got the following output:
 -----------------
 | a   | b   | c |
 =================
 | <a> | <b> |

The answer is a) correct but b) very unhelpful. These are relative IRIs but no 
base is given. 

 After this I tried another SPARQL query (given below) for which I obtained
 an incorrect output:
 SPARQL query:
 root@server:/home/apache-jena-2.10.0/bin# ./tdbquery --time
 --loc=/home/Jena/try 'select ?a?b?c where { graph ?j1 { ?a <b>
 <http://dbpedia.org/data/Plasmodium_hegneri.xml> } }'
 Output:
 -------------
 | a | b | c |
 =============
 -------------
 Time: 0.095 sec

SPARQL, unlike N-Quads, allows relative IRIs [2]. In the absence of a BASE 
directive the IRIs are resolved relative to the query itself. In this case the 
current directory is used as the base, so <b> is understood as <CURRENT_DIR/b>. 
You can see this if you add --explain:

$ tdbquery --explain --loc=try 'select ?a?b?c where { graph ?j1 { ?a <b> 
<http://dbpedia.org/data/Plasmodium_hegneri.xml> } }'
00:58:24 INFO  exec :: QUERY
  SELECT  ?a ?b ?c
  WHERE
    { GRAPH ?j1
        { ?a <b> <http://dbpedia.org/data/Plasmodium_hegneri.xml> }
    }
00:58:24 INFO  exec :: ALGEBRA
  (project (?a ?b ?c)
    (quadpattern (quad ?j1 ?a <file:///private/tmp/b> <http://dbpedia.org/data/Plasmodium_hegneri.xml>)))
00:58:24 INFO  exec :: Execute ::   (?j1 ?a <file:///private/tmp/b> <http://dbpedia.org/data/Plasmodium_hegneri.xml>)
-------------
| a | b | c |
=============
-------------

<b> is resolved to <file:///private/tmp/b> (I ran this in the temp dir). Thus, 
no results.
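If you do want relative IRIs in a query to resolve predictably, give the parser 
an explicit base; a small sketch (current package names, hypothetical base URI):

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;

public class BaseResolution {
    public static void main(String[] args) {
        String qs = "SELECT ?a ?b ?c WHERE { GRAPH ?j1 { ?a <b> "
                  + "<http://dbpedia.org/data/Plasmodium_hegneri.xml> } }";
        // Second argument is the base URI used to resolve <b>.
        Query q = QueryFactory.create(qs, "http://example.org/base/");
        System.out.println(q);   // <b> now resolves against the base, not the current directory
    }
}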

So the short answer is that the input data is broken.

Damian

[1] http://www.w3.org/TR/n-quads/#sec-iri
[2] http://www.w3.org/TR/2013/REC-sparql11-query-20130321/#relIRIs

Re: SDBException

2013-11-07 Thread Damian Steer

On 7 Nov 2013, at 09:45, james.c...@tessella.com wrote:

 Hi
 
 I'm pretty new to Jena/SDB but have been having a problem running on 
 Oracle with
 the LayoutTripleNodesIndex database layout.
 
 I've been seeing the following error when saving triples.
 
 My questions are:
 
 what is the NNODEQUADS table used for? and what data is stored in the N2 
 column? and why is it set 
 as a NVARCHAR(10 CHAR)?

NNODEQUADS is a temporary table used for bulk loading. It's essentially a 
denormalised (well, pre-normalised) quad store, laid out like:

[ Node1 values , Node2 values , Node3 values , Node4 values ]

Where each node is made of hash, lex, lang, datatype, ...

The N2 column would be 'lang', so it sounds like you have a very large lang 
value somewhere. Does that sound likely?

Damian

Re: Date Query Performance

2013-09-11 Thread Damian Steer

On 11 Sep 2013, at 15:04, Iain Ritchie iainritc...@gmail.com wrote:

 Hello,
 
 I am executing the following count(*) query multiple times for a sliding
 date range i.e. count for 1st September, 2nd September etc. Can anyone
 suggest a more efficient way of doing this since I have to execute the
 query a substantial number of times?
 
 PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
 SELECT (count(*) as ?count) WHERE
 {
 ?x <http://dateTime> ?datetime
 FILTER ( "2013-08-02T00:00:00+01:00"^^xsd:dateTime > ?datetime && ?datetime >
 "2013-08-01T00:00:00+01:00"^^xsd:dateTime)
 }
 
 Many Thanks.

I'd try something of the form:

SELECT ?date  (count(*) as ?count)  WHERE
{
?x <http://dateTime> ?datetime
}
GROUP BY (substr(str(?datetime), 1, 10) as ?date)

Roughly: pick a partitioning function, and count grouped by that. In this case 
I've used the first 10 characters of the datetime to get the day.

You could use an extension function here if that rather ropey 'day' isn't 
quite right.

Damian



Re: RDFDataTyping in a statement

2013-07-23 Thread Damian Steer

On 23 Jul 2013, at 12:16, Phil Ashworth pashwor...@gmail.com wrote:

 Sorry if these are trivial questions.

Not a problem.

 The triple written to the file is
 
 <http://me.org#myresource> <http://me.org#myproperty> false ;
 
 For boolean I don’t see the data type written out
 
 i.e. I was expecting
 <http://me.org#myresource>
 
 <http://me.org#myproperty> "false"^^
 <http://www.w3.org/2001/XMLSchema#boolean> ;
 
 
 
 Am I missing something?

Yes! Note that the output is:

<http://me.org#myresource> <http://me.org#myproperty> false

not:

<http://me.org#myresource> <http://me.org#myproperty> "false"

(no quotes in the first one)

The former is the literal boolean value that could also be written as 
"false"^^<http://www.w3.org/2001/XMLSchema#boolean>.

In other words, it's what you want, just written in a form you weren't 
expecting.
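A small sketch of the round trip, if it helps (current package names; the 
resource and property are the ones from your example): the typed boolean goes 
in, and Turtle writes it back out using the bare false shorthand.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class BooleanLiteral {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.createResource("http://me.org#myresource")
         .addProperty(m.createProperty("http://me.org#myproperty"),
                      m.createTypedLiteral(false));
        // Turtle abbreviates "false"^^xsd:boolean to the bare token false.
        m.write(System.out, "TURTLE");
    }
}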

 One further question on this (sorry)
 
 In the above examples can
 
 String objecttype = "http://www.w3.org/2001/XMLSchema#string";
 
 Be Shortened to
 
 String objecttype = "xsd:string";

Good question. I don't think that works: prefixed names are a serialisation 
shorthand, so in Java code you need the full URI.

Damian

Re: SPARQL interaction of OPTIONAL, FILTER, and BIND

2013-07-10 Thread Damian Steer

On 9 Jul 2013, at 21:01, Joshua TAYLOR joshuaaa...@gmail.com wrote:

 prefix : <http://example.org/>
 select ?element ?index ?element2 ?index2 where {
  ?element :atIndex ?index .
  OPTIONAL {
BIND( ?index - 1 as ?index2 )
?element2 :atIndex ?index2 .
  }
 }
 order by ?index
 
 
 produces lots more results; 

SPARQL is evaluated inside out, so start with:

  {
BIND( ?index - 1 as ?index2 )
?element2 :atIndex ?index2 .
  }

?index is unbound, so ?index2 isn't bound by that first bit. Effectively you 
just have:

{ ?element2 :atIndex ?index2 }

OPTIONAL does nothing, since there's no shared variable, so you end up with a 
cross join:

{ ?element :atIndex ?index  }
{ ?element2 :atIndex ?index2 }

i.e. every combination.

 Now, using a FILTER in the OPTIONAL:
 
 
 prefix : <http://example.org/>
 
 select ?element ?index ?element2 ?index2 where {
  ?element :atIndex ?index .
  OPTIONAL {
FILTER( ?index - 1 = ?index2 )
?element2 :atIndex ?index2 .
  }
 }
 order by ?index
 
 
 produces results that actually show that ?index2 is constrained, and
 are almost the same the as the first query (except that the case where
 ?index2 is -1 doesn't occur):

Ok, _now_ I'm scratching my head.

Damian

Re: SPARQL interaction of OPTIONAL, FILTER, and BIND

2013-07-10 Thread Damian Steer

On 10 Jul 2013, at 10:07, Damian Steer d.st...@bris.ac.uk wrote:

 
 On 9 Jul 2013, at 21:01, Joshua TAYLOR joshuaaa...@gmail.com wrote:
 
 prefix : <http://example.org/>
 
 select ?element ?index ?element2 ?index2 where {
 ?element :atIndex ?index .
 OPTIONAL {
   FILTER( ?index - 1 = ?index2 )
   ?element2 :atIndex ?index2 .
 }
 }
 order by ?index
 
 
 produces results that actually show that ?index2 is constrained, and
 are almost the same the as the first query (except that the case where
 ?index2 is -1 doesn't occur):
 
 Ok, _now_ I'm scratching my head.

Just to expand on why I'm scratching my head. It really, really looks like 
?index is bound to 0 in the optional block.

I expected it to act like FILTER( ?unbound - 1 = ?index2 ) (always false), but 
that can't be what's happening, since then you'd never find a solution where 
?element2 and ?index2 are bound.
Then I wondered whether the filter was being moved to the outer block, but 
that's rather different again -- you're filtering a cross product essentially, 
and won't get 'ragged' results (ones where some variables are unbound).

I'm voting 'bug'.

Damian



Re: SPARQL interaction of OPTIONAL, FILTER, and BIND

2013-07-10 Thread Damian Steer

On 10 Jul 2013, at 12:36, Joshua TAYLOR joshuaaa...@gmail.com wrote:

 I know that when there are *sub*-queries in a query, they're evaluated
 from the inside out (i.e., innermost subqueries are evaluated first),
 but this doesn't make sense for OPTIONAL patterns, does it?  The whole
 point of an optional pattern is to *extend* the solutions of the
 enclosing pattern if possible.

They are extended in the join. Evaluate one block, evaluate other block, join. 
What order this happens in may vary, as you say, but the answers are supposed 
to be equivalent.

I'm still fuzzy about what the spec says for the FILTER case, but on BIND here 
are a couple of fun facts:

* 'Use of BIND ends the preceding basic graph pattern'

OPTIONAL {
   BIND( ?index - 1 as ?index2 )
   ?element2 :atIndex ?index2 .
 }

BIND here is doing nothing at all, since it closes the block and there's 
nothing preceding it. If you look at the output of --explain you'll find

(extend ((?index2 (- ?index 1)))
(table unit))

i.e. it's acting on nothing.
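You can see the same thing programmatically by compiling the query to its 
algebra; a sketch (current package names):

import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.sparql.algebra.Algebra;
import org.apache.jena.sparql.algebra.Op;

public class ShowAlgebra {
    public static void main(String[] args) {
        Query q = QueryFactory.create(
            "PREFIX : <http://example.org/> "
          + "SELECT ?element ?index ?element2 ?index2 WHERE { "
          + "  ?element :atIndex ?index . "
          + "  OPTIONAL { BIND(?index - 1 AS ?index2) ?element2 :atIndex ?index2 } }");
        Op op = Algebra.compile(q);
        // The printed algebra contains the (extend ...) over (table unit) shown above.
        System.out.println(op);
    }
}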

If you add it after the ?element2 triple you'll get an error, since ?index2 is 
already bound.

* You can rearrange your second query to get what you want:

OPTIONAL {
   ?element2 :atIndex ?index2 .
   BIND(  ?index2 + 1 as ?index)
 }

---------------------------------------
| element | index | element2 | index2 |
=======================================
| :a      | 0     |          |        |
| :b      | 1     | :a       | 0      |
| :c      | 2     | :b       | 1      |
| :d      | 3     | :c       | 2      |
---------------------------------------

Damian

Re: Unexpectedly slow query

2013-05-15 Thread Damian Steer

On 15 May 2013, at 14:10, huey...@aol.com wrote:

 Hello,
 
 
 
 I am using the following Sparql query against a TDB store:
 
 SELECT * 
 WHERE {
   ?pat a nci:Patient .
   ?pat ec:Has_Id ?patId .
   ?findingProp rdfs:subPropertyOf ec:Has_Finding .
   ?pat ?findingProp ?finding .
   ?finding a ?findingType .
 }
 
 
 When I run this query WITHOUT the last triple (the bolded line), it returns 
 the correct result within seconds.
 
 But when I run this query WITH the last triple, the query runs a very long 
 time. I do not know how long b/c I cancelled it after 1 hour.

I wonder whether the optimiser sees '?finding a ?findingType .' as more ground 
than the preceding patterns, and thus reorders the query?

Have a look at [1] which explains some of the diagnostic features of TDB. 
Getting hold of the query plan would be very useful.

Damian

[1] 
https://jena.apache.org/documentation/tdb/optimizer.html#Investigating_what_is_going_on





Re: Sparql Update Delete with no explicit graph

2013-05-10 Thread Damian Steer
On 10/05/13 13:13, Cekov, Luchesar wrote:
 Hi there,

Hi,

 I started using Jena TDB and Fuseki just recently and I am trying to post 
 some Sparql Update Delete statements.
 I am experiencing a problem when deleting with no explicit graph specified.
 
 DELETE { <http://s/1> ?p ?o } WHERE { <http://s/1> ?p ?o . }
 
 The above will not delete any statements that have been inserted in a 
 specific graph.

 
 WITH <http://test/uri>
 DELETE { <http://s/1> ?prop ?obj } WHERE { <http://s/1> ?prop ?obj }
 
 
 The statements really do get deleted!
 
 
 According to the SPARQL 1.1 Update spec [1] the default graph should be used if 
 there is no graph explicitly specified in the update query, which I reason 
 means that the statements above should be deleted even if a GRAPH is not 
 explicitly specified. Furthermore, if I query with no explicit graph specified, 
 like:
 
 
 select * where { <http://s/1> ?p ?q }
 
 
 I get all the statements from above.

Your store is configured so that the default graph is a union of all the
named graphs, rather than a distinct graph in itself. For queries that's
fine, but for update operations you need to be specific about where the
data is stored.* So try:

DELETE { GRAPH ?g { <http://s/1> ?p ?o } } WHERE { GRAPH ?g {
<http://s/1> ?p ?o } }

instead, which remembers where the triple was found.
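If you're sending that from Java rather than straight at the endpoint, a quick 
sketch (current package names; the service URL is made up):

import org.apache.jena.update.UpdateExecutionFactory;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateProcessor;
import org.apache.jena.update.UpdateRequest;

public class DeleteEverywhere {
    public static void main(String[] args) {
        UpdateRequest req = UpdateFactory.create(
            "DELETE { GRAPH ?g { <http://s/1> ?p ?o } } "
          + "WHERE  { GRAPH ?g { <http://s/1> ?p ?o } }");
        UpdateProcessor proc =
            UpdateExecutionFactory.createRemote(req, "http://localhost:3030/ds/update");
        proc.execute();
    }
}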

Damian

* What you want could be made to work, I think. You've just bumped into
the limits of the current illusion.


Re: Concurrency in Jena/SDB

2013-04-15 Thread Damian Steer

On 15 Apr 2013, at 14:38, David Jordan david.jor...@sas.com wrote:

 
 So every call to a method of Model or OntModel is done in a separate 
 transaction? This could easily explain the poor performance I am getting, and 
 those of others who have complained about SDB performance in this group.

Poor performance typically has more to do with the cumulative overhead of large 
numbers of small operations than transactions per se. 

SDB tries to queue up added or removed triples in large chunks (c 20,000 
triples), and execute them in a small number of RDB operations. Each call to 
add or remove will be a single chunk, so it's best to make them as big as 
possible.

The model.notifyEvent(GraphEvents.startRead) business is a way to take some 
control of that queuing and allow the queue to cross method boundaries. So, for 
example:

while (condition) { if (condition) model.add(statement); }

would benefit immensely from being wrapped in start/finishRead.
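Roughly like this (a sketch, not tested against your code; Jena 2 / SDB-era 
package names, and the addAll helper and statements iterator are made up for 
illustration):

import java.util.Iterator;

import com.hp.hpl.jena.graph.GraphEvents;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.Statement;

public class BulkAdd {
    // 'statements' stands in for wherever the data actually comes from.
    static void addAll(Model model, Iterator<Statement> statements) {
        model.notifyEvent(GraphEvents.startRead);       // start queuing
        try {
            while (statements.hasNext()) {
                model.add(statements.next());           // queued client-side by SDB
            }
        } finally {
            model.notifyEvent(GraphEvents.finishRead);  // flush the queued chunk(s)
        }
    }
}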

The problem is that you are queuing client side, so peeking at the contents of 
the model is a bad idea: the triples may not have been added.

Damian

Re: custom algebra optimizer

2013-03-22 Thread Damian Steer

On 22 Mar 2013, at 17:01, Diogo FC Patrao djogopat...@gmail.com wrote:

 I'm rendering back the query, implementing the OpAsQuery missing methods.
 However I got another doubt, check the query below:
 
 select ?p count( ?b ) { ?p a <http://marafo.com#Paciente>. ?a 
 <http://marafo.com#tem> ?b. LET ( ?marafo := ?b+1 ) } group by ?p
 
 
 The generated algebra is:
 
 (project (?p ?.1)
   (assign ((?.1 ?.0))
     (group (?p) ((?.0 (count ?b)))
 
 I didn't get why there is this (assign (?.1 ?.0)) thing... can someone
 enlighten me?

AFAIK it's simply an artifact of the way the compiler works.

OpAsQuery handles this by remembering the aggregation as it goes (in 
varExpression), then undoes the split later:

https://github.com/apache/jena/blob/trunk/jena-arq/src/main/java/com/hp/hpl/jena/sparql/algebra/OpAsQuery.java#L523

https://github.com/apache/jena/blob/trunk/jena-arq/src/main/java/com/hp/hpl/jena/sparql/algebra/OpAsQuery.java#L56

Damian

Re: Fuseki on virtual triples

2012-11-26 Thread Damian Steer

On 23/11/12 17:12, Dimitris Spanos wrote:
 Hello all,

Hi Dimitris,

Not sure you got a reply to this.

 I have built a custom QueryEngine that enables SPARQL access to a 
 relational database ...

This is read only, I take it?

 If I want to use Fuseki, do I absolutely need to create a custom 
 implementation of Graph? Which Graph methods are actually used by 
 Fuseki and are the ones that I would have to implement? Just 
 graphBaseFind() or something more?

From what I remember you'll need an actual Graph for DESCRIBE, but
that's about it (assuming your query engine is complete).

However, given that you've implemented querying, the Graph interface
ought to be trivial.
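Something like this is all a read-only Graph needs (a sketch with current
org.apache.jena package names; older releases use com.hp.hpl.jena.* and
graphBaseFind takes a TripleMatch rather than a Triple):

import java.util.Collections;

import org.apache.jena.graph.Triple;
import org.apache.jena.graph.impl.GraphBase;
import org.apache.jena.util.iterator.ExtendedIterator;
import org.apache.jena.util.iterator.WrappedIterator;

public class VirtualGraph extends GraphBase {
    @Override
    protected ExtendedIterator<Triple> graphBaseFind(Triple pattern) {
        // Hypothetical: translate the pattern into a lookup against the
        // relational source and wrap the matches. Empty here for brevity.
        return WrappedIterator.create(Collections.<Triple>emptyList().iterator());
    }
}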

(There's a thread on jena-dev discussing the simplification of Graph)

Damian


Re: Feeding triples from java-rdfa-0.4.2 + Jena into fuseki-0.2.5-20120916.055428-41

2012-10-24 Thread Damian Steer

On 24/10/12 13:52, Andy Seaborne wrote:

 Not knowing how java-rdfa works, I guess that it is creating
 langtags directly.  It does not hook into RIOT.  The validation
 code only works for RIOT parsing (NT, Turtle, etc)

Yep, it creates lang tags directly.

Any pointers on hooking it into RIOT?

Damian



Re: com.hp.hpl.jena.query.QueryExecution.close()

2012-08-27 Thread Damian Steer

On 27 Aug 2012, at 17:17, Rob Vesse rve...@yarcdata.com wrote:

 On 8/26/12 11:18 AM, Andy Seaborne a...@apache.org wrote:

 And on a related note, I wonder if execSelect or even deeper in
 HttpQuery.exec should read the entire response, and not try to do
 end-to-end streaming.  That way, a slow/bad application can't affect the
 remote server by holding connections open for too long.  Obvious down
 side is that things are resource limited
 
 I would disagree; while this is a useful idea in principle and in some use
 cases, it quickly falls down as soon as you have moderately large results
 with an OOM exception.

Would it be possible to use a buffer? For small-ish result sets you would get 
the behaviour Andy suggests, but avoid the OOM issue.

Damian

Re: com.hp.hpl.jena.query.QueryExecution.close()

2012-08-27 Thread Damian Steer

On 27 Aug 2012, at 18:51, Stephen Allen sal...@apache.org wrote:

 Would it be possible to use a buffer? For small-ish result sets you would 
 get the behaviour Andy suggests, but avoid the OOM issue.
 
 
 Something like setFetchSize() [1]?  Oracle [2] defaults to 10 rows,
 while PostgreSQL [3] and MySQL [4] both buffer the entire result.

Yep. In SDB we've tripped over the default behaviour of MySQL many times.
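For anyone following along, this is the plain JDBC knob being discussed (a
sketch; the connection URL and table are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeDemo {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/store", "user", "password");
             Statement s = c.createStatement()) {
            c.setAutoCommit(false);   // PostgreSQL only uses a cursor with autocommit off
            s.setFetchSize(100);      // fetch rows 100 at a time instead of buffering them all
            // MySQL's driver is different again: it only streams with setFetchSize(Integer.MIN_VALUE).
            try (ResultSet rs = s.executeQuery("SELECT s, p, o FROM triples")) {  // made-up table
                while (rs.next()) {
                    // process one row at a time
                }
            }
        }
    }
}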

Damian

Re: SPARQL Update on default RDF model

2012-08-08 Thread Damian Steer

On 08/08/12 07:51, Enes Bulut wrote:
 Hi all,

 First I create a default model.
 
 Model model = ModelFactory.createDefaultModel(); ...
 
 
 Query string is something like that:
 
 String queryStr = "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
 "WITH <http://example/addresses> " +

You're trying to update a named graph (WITH), but you only have a
default graph. Drop the WITH.

 UpdateRequest upReq = UpdateFactory.create(); upReq.add(queryStr); 
 UpdateAction.execute(upReq, model.getGraph());
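Put together, a minimal working version without the WITH might look like this
(a sketch with made-up data; Jena 2 package names to match your code):

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.update.UpdateAction;

public class UpdateDefaultGraph {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String update =
            "PREFIX foaf: <http://xmlns.com/foaf/0.1/> "
          + "INSERT DATA { <http://example/alice> foaf:name \"Alice\" }";
        UpdateAction.parseExecute(update, model);   // updates the default graph directly
        model.write(System.out, "TURTLE");
    }
}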

Damian



Re: How to convert assign URL to blank node?

2012-06-26 Thread Damian Steer

On 26/06/12 09:20, Andy Seaborne wrote:
 On 26/06/12 01:30, franswors...@googlemail.com wrote:
 How can I assign an URI to a blank node? The Resource class only 
 provides getURI() or getId() methods, but the URI can't be set. 
 Do I have to create a new Resource, copy all properties and 
 delete the original node?
 
 
 Yes, you create a new resource.  Resources are immutable - you 
 can't modify them after creation.

You can use ResourceUtils.renameResource(oldResource, uri) [1] to
achieve the same effect. Behind the scenes this removes old statements
using oldResource and makes new ones with uri.
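A minimal sketch (current package names; the URI is made up):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.util.ResourceUtils;
import org.apache.jena.vocabulary.RDFS;

public class RenameBlankNode {
    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        Resource bnode = m.createResource();               // a blank node
        bnode.addProperty(RDFS.label, "started life as a blank node");

        // Rewrites every statement mentioning the blank node to use the URI instead.
        Resource named = ResourceUtils.renameResource(bnode, "http://example.org/nowNamed");

        m.write(System.out, "TURTLE");
        System.out.println(named.getURI());
    }
}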

Damian

[1]
http://jena.apache.org/documentation/javadoc/jena/com/hp/hpl/jena/util/ResourceUtils.html


Re: Strange behaviour of XMLLiterals in RDF/XML

2012-06-25 Thread Damian Steer

On 25/06/12 13:34, Andy Seaborne wrote:

 The best RDF-WG is going to do is make XMLLiteral less mandatory.

'Less mandatory'? :-)

I was writing a similar reply as this came in. It's horrible trying to
explain it, and it will be nice not to have to do that post-rdf 1.1.

Damian



Re: importing ntriples into tdb without stop at an error

2012-06-13 Thread Damian Steer

On 13/06/12 14:03, Stefan Scheffler wrote:
 Hello, I need to import large n-triple files (dbpedia) into a tdb.
 The problem is that many of the triples are not valid (like
 missing '' or invalid chars), leading to an exception which
 quits the import... I just want to skip them and continue, so that
 all valid triples are in the TDB at the end.
 
 Is there a possibility to do that easily? I tried to rewrite the
 ARQ, but this is very complex.
 With friendly regards,
 Stefan Scheffler
 

You'd be much better off finding an n-triple parser that kept going
and also spat out (working) n-triples for piping to TDB. I can't see
an option like that in the riot command line.
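That said, a rough sketch of such a filter using Jena itself: parse each line
on its own and only pass through the lines that parse (slow but simple, and it
relies on N-Triples being one triple per line; current package names):

import java.io.*;
import java.nio.charset.StandardCharsets;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class SkipBadTriples {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(
                 new InputStreamReader(System.in, StandardCharsets.UTF_8));
             PrintWriter out = new PrintWriter(
                 new OutputStreamWriter(System.out, StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                Model scratch = ModelFactory.createDefaultModel();
                try {
                    scratch.read(new StringReader(line), null, "N-TRIPLE");
                    out.println(line);        // parsed cleanly: pass it through
                } catch (Exception e) {
                    // malformed line: skip it and keep going
                }
            }
        }
    }
}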

Damian