Re: Memory leak in TDB using a single Dataset object

2016-02-04 Thread Andy Seaborne

On 04/02/16 10:41, Jean-Marc Vanel wrote:

Comment interleaved.


Ditto



2016-02-04 10:26 GMT+01:00 Andy Seaborne :


On 04/02/16 08:15, Jean-Marc Vanel wrote:


Sorry for being vague.
The RAM usage keeps growing, until it crashes with an Out Of Memory exception.



TDB uses a bounded amount of caching, though the journal workspace can
grow.



So, if TDB uses a bounded amount of caching *in memory*, there is nothing
against using the same Dataset object as a singleton.
The journal workspace you mention is on disk, isn't it? My problem is not
on disk at all.



If there are lots of large literals, you'll need more heap.



  The largest literals involved in the transactions are DBpedia abstracts; I
would not call those "large" literals.
Anyway, the problem happens sooner or later; raising the memory does not
help.


The transaction system in TDB1 keeps up to 10 transactions buffered: you
can switch that off with:

TransactionManager.QueueBatchSize = 0 ;

then commits are flushed back to the main database as soon as possible.
That needs no readers to be active.
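
For illustration, a minimal Java sketch of switching the commit queue off before opening the dataset - a sketch only, assuming the TDB1 classes TDBFactory and TransactionManager under the Jena 2.x package names used elsewhere in this thread:

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.tdb.TDBFactory;
    import com.hp.hpl.jena.tdb.transaction.TransactionManager;
    import com.hp.hpl.jena.vocabulary.RDFS;

    public class FlushCommitsExample {
        public static void main(String[] args) {
            // Do not queue committed transactions; write them back to the
            // main database as soon as no reader needs the old state.
            TransactionManager.QueueBatchSize = 0;

            Dataset dataset = TDBFactory.createDataset("target/tdb-example"); // hypothetical location

            dataset.begin(ReadWrite.WRITE);
            try {
                dataset.getDefaultModel()
                       .createResource("http://example.org/s")
                       .addProperty(RDFS.label, "example");
                dataset.commit();
            } finally {
                dataset.end();
            }
        }
    }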



I'll try that too, and report.



If you have a reader that doesn't commit/end properly, the system can
never write to the main database.



That should not be possible, since I use a Scala construct that wraps the
fragment of code inside a transaction and automatically calls commit or end.


Pointer (to that code)?

And you don't pass out anything that is hanging onto the state inside 
the transaction?


Are you using (indirectly) the Jena model API or using SPARQL?



However, there are some reads on the database that happen outside a transaction.
This covers navigation by find() on a graph.
During development, every time a runtime exception said "outside a
transaction", I fixed that. So the other cases were left outside a
transaction; is that wrong?


Yes.

(I thought it complained once in transaction mode if a non-transaction
operation happened.)
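
For illustration, a minimal Java sketch of wrapping such a read - including Graph.find() navigation - in a read transaction, so that it neither fails nor holds the write-back back; a sketch only, using the standard Jena Dataset/Graph API (Jena 2.x package names):

    import com.hp.hpl.jena.graph.Graph;
    import com.hp.hpl.jena.graph.Node;
    import com.hp.hpl.jena.graph.Triple;
    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.util.iterator.ExtendedIterator;

    public class ReadInTransaction {
        public static void dumpAllTriples(Dataset dataset) {
            dataset.begin(ReadWrite.READ);
            try {
                Graph graph = dataset.getDefaultModel().getGraph();
                ExtendedIterator<Triple> it = graph.find(Node.ANY, Node.ANY, Node.ANY);
                try {
                    while (it.hasNext()) {
                        System.out.println(it.next());
                    }
                } finally {
                    it.close();
                }
            } finally {
                dataset.end(); // always end the read transaction, even on exceptions
            }
        }
    }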




...

It is a disk-backed dataset, not a TDB in-memory one?




Yes, disk based.


Ok - it sounds like either the journal is locked or

You can try turning logging on for TransactionManager (level "DEBUG").

Andy






Re: Memory leak in TDB using a single Dataset object

2016-02-04 Thread Rob Vesse

On 04/02/2016 10:41, "Jean-Marc Vanel"  wrote:

>The journal workspace you mention is on disk , isn't it ? My problem is
>not
>on disk at all.

No

The journal is both in-memory and on-disk, as it is a write-ahead log for
failure recovery purposes: the disk portion preserves the data for failure
recovery, while the in-memory portion provides the data to the live instance.

If there is a non-empty journal on disk, then there is a corresponding
amount of memory within the JVM heap used to store the latest state of the
data for subsequent transactions, without overwriting the old state of
the data which ongoing transactions may still be accessing.

Rob






Re: TDB suddenly seems to use more CPU

2016-02-04 Thread Laurent Rucquoy
Thank you very much for your explanations.

Regards,
Laurent

On 21 January 2016 at 11:53, Andy Seaborne  wrote:

> On 20/01/16 10:38, Laurent Rucquoy wrote:
>
>> Hi,
>>
>> 1 - About the direct mode:
>> Yes, the TDB is running in direct mode, but I have no explanation about
>> why
>> it has been explicitly set in our application source code.
>> 1.1 - What will change in our application if I remove the
>> TDB.getContext().set(SystemTDB.symFileMode, FileMode.direct); line ?
>>
>
> Firstly - I use TDB in Linux, not Windows, so I'm looking to hear of
> people's experiences.  This is based on what I have heard ...
>
> On Windows, the difference in performance between direct and mapped modes
> seems to be much smaller (near zero?) than on Linux.
>
> And you can't delete databases while the JVM using the DB is alive.  This is a
> very long-standing Java issue (see the Java bug tracker - it is in there
> several times in different reports).
>
> The TDB test cases suffer from this - they use a lot of temporary space as
> a new DB is made for each test rather than deleting and reusing the directory.
>
> 1.2 - Is there a default mode which will suit typical cases?
>>
>
> The only real way to know the right setting for you is to try it.  Your
> data, the usage patterns and the size may all be factors.
>
> That said, other than the "delete database" issue, I don't remember any
> reports suggesting it makes much difference until data sizes go up.
>
> In the version you are running (which is quite old), it is hard to tune
> the cache sizes.  In mapped mode there are no index caches to manage - it
> flexes automatically (the OS does it - not that TDB has some built-in
> smarts).
>
> 1.3 - Is it possible that this 'forced' direct mode could be the cause of
>> our high CPU usage issue?
>>
>
> There is one possibility which is that the GC is under pressure; if you
> are close to max heap, it may be working hard to keep memory available.
>  There is no specific indication of this one way or the other in your
> report; it is just a possibility.  Solution - increase heap by 25%-50% and
> see what happens.
>
>
>>
>> 2 - About the environment:
>> OS: Windows Server 2008 R2 64bit (Virtual Machine)
>> Java: 1.7.0_67-b01 (64bit)
>>
>
> VMs can be affected by what else the real hardware is hosting.  Suppose
> the hardware is busy - your VM only gets its allocated percentage of the CPU
> time, whereas when not busy your VM may be getting a lot more than the
> "contract" amount.  Result - requests take a bit longer, and that has a
> knock-on effect of more results being active at any one time, causing more
> CPU to be needed for your VM.  But again, only a possibility.
>
>
>>
>> 3 - About the data and the query
>> The changes on the data occur through Jenabean save calls (the underlying
>> object model has not changed.)
>> The query at the point logged in the dump messages is:
>>
>> PREFIX base: 
>> PREFIX rdf: 
>> PREFIX xmls: 
>> SELECT ?x
>> {
>> ?image base:sopInstanceUID
>> "x.x.xx.x..x.x.x.x.x"^^xmls:string .
>> ?image a base:Image .
>> ?seq ?p ?image .
>> ?x base:images ?seq .
>> ?x a base:ImageAnnotation ;
>> base:deleted false .
>> }
>>
>
> (I don't know jenabean).
>
> There is nothing strange looking about that query.
>
> If you added a lot more data, rather than steady incremental growth, it
> might have
>
> Increase RAM and increase the block caches:
> System properties:
>
> BlockReadCacheSize : default: 10000 so try 25000
> BlockWriteCacheSize : default: 2000 so try 5000
> NodeId2NodeCacheSize : default 500000 so try 1000000 (1 million)
>
> these are all in-heap so increase the heap size.
>
> (changes are logged at level info so you can check they have an effect - I
> am not on my dev machine at the moment so I can't easily check details here
> I'm afraid)
>
> 4 - About the BLOCKED state:
>> Indeed it means that the thread was blocked (not using CPU) at the time of
>> the dump.
>> But looking at the threads list and corresponding CPU usage, these threads
>> were each using about 5% of the CPU, so there is only a 1 in 20 chance that
>> a thread dump will catch them running.
>> Anyway, my colleague managed to get a thread dump while some of the
>> incriminated threads were running.
>> This was possible because he repeated the process a few times and he got a
>> thread that, at the time, was using about 12% of the CPU (so a higher chance
>> to catch it while running).
>>
>> Here are two stacktraces (taken from the thread dump) of two threads that
>> were using a lot of CPU and that were caught running:
>>
>
> There is still an occurrence of stacked journals
>
> > at
> com.hp.hpl.jena.tdb.base.block.BlockMgrSync.release(BlockMgrSync.java:76)
> > - locked <0x000729d47240> (a
> com.hp.hpl.jena.tdb.base.block.BlockMgrCache)
> > at
> 

Re: Memory leak in TDB using a single Dataset object

2016-02-04 Thread Jean-Marc Vanel
Comment interleaved.

2016-02-04 10:26 GMT+01:00 Andy Seaborne :

> On 04/02/16 08:15, Jean-Marc Vanel wrote:
>
>> Sorry for being vague.
>> The RAM usage keeps growing, until it crashes with an Out Of Memory exception.
>>
>
> TDB uses a bounded amount of caching, though the journal workspace can
> grow.
>

So, if TDB uses a bounded amount of caching *in memory*, there is nothing
against using the same Dataset object as a singleton.
The journal workspace you mention is on disk, isn't it? My problem is not
on disk at all.


> If there are lots of large literals, you'll need more heap.
>

 The largest literals involved in the transactions are DBpedia abstracts; I
would not call those "large" literals.
Anyway, the problem happens sooner or later; raising the memory does not
help.


The transaction system in TDB1 keeps up to 10 transactions buffered: you
> can switch that off with:
>
>TransactionManager.QueueBatchSize = 0 ;
>
> then commits are flushed back to the main database as soon as possible.
> That needs no readers to be active.
>

I'll try that too, and report.


> If you have a reader that doesn't commit/end properly, the system can
> never write to the main database.
>

That should not be possible, since I use a Scala construct that wraps the
fragment of code inside a transaction and automatically calls commit or end.

However, there are some reads on the database that happen outside a transaction.
This covers navigation by find() on a graph.
During development, every time a runtime exception said "outside a
transaction", I fixed that. So the other cases were left outside a
transaction; is that wrong?

...

It is a disk-backed dataset, not a TDB in-memory one?
>

Yes, disk based.

-- 
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://deductions-software.com/
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui


Re: shiro.ini - Admin access to anyone?

2016-02-04 Thread Andy Seaborne

On 03/02/16 22:07, Jason Levitt wrote:

OK. For some reason there's a file named shiro.ini in the top-level
directory, but it is not used. The shiro.ini file
that's used is in the "fun" directory.  Why is there a shiro.ini file
in the top-level directory


i.e. FUSEKI_HOME?


if it is not used for
anything?


Normally, there is one on disk.

Fuseki hunts for shiro.ini by looking in:

FUSEKI_BASE
FUSEKI_HOME
the classpath

Normally the only copy is in FUSEKI_BASE, put there when initialized
(there is a template copy in the server jar).


If the server is run twice with different FUSEKI_BASE, each area is set up.

Andy



J

On Wed, Feb 3, 2016 at 3:20 PM, Jason Levitt  wrote:

How can I run fuseki 2.3.x on a remote server and
allow full access to anyone who knows the IP address?

This is all that's in my shiro.ini file:

[urls]
## or to allow any access.
/$/** = anon

# Everything else
/**=anon



Anonymous users can go to http://xx.xx.xx.xx:3030
but they do not have access to administrative functions.
How can I give anonymous users access to the admin section?

-J




Re: Memory leak in TDB using a single Dataset object

2016-02-04 Thread Andy Seaborne

On 04/02/16 08:15, Jean-Marc Vanel wrote:

Sorry for being vague.
The RAM usage keeps growing, until it crashes with an Out Of Memory exception.


TDB uses a bounded amount of caching, though the journal workspace can grow.

If there are lots of large literals, you'll need more heap.

The transaction system in TDB1 keeps up to 10 transactions buffered: you
can switch that off with:


   TransactionManager.QueueBatchSize = 0 ;

then commits are flushed back to the main database as soon as possible. 
 That needs no readers to be active.


If you have a reader that doesn't commit/end properly, the system can 
never write to the main database.


If you have a system where there are always readers, it will grow but 
you don't have that setup if the below is true:



AFAIK transactions occur on the same thread started by the Play! framework
and so do not overlap.



About the "pattern of transactions", I don't know what to answer. If there
were a questionnaire I'd be glad to answer it. Also, I can instrument the code
if there is some procedure.

It is running with Java version "1.8.0_65", on Ubuntu 15.10.

The test I'm going to do is to call close() and refresh the Dataset when
reaching 80% of the maximum memory.
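
For illustration, the 80% check itself needs nothing more than the standard Runtime API; a minimal Java sketch (the threshold and the idea of refreshing the Dataset come from the message above, they are not a TDB feature):

    // Returns true when more than 80% of the maximum heap is in use.
    static boolean heapAbove80Percent() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return used > 0.8 * rt.maxMemory();
    }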


It is a disk-backed dataset, not a TDB in-memory one?

Andy




2016-02-03 23:05 GMT+01:00 Andy Seaborne :


Hi there -

"memory leak" has possible several meaning, not sure which you you mean:

* RAM usage is growing?
* Disk usage is growing?
* a specific file (the journal is growing)?

What is the pattern of transactions? (how many, do they overlap?)

 Andy


On 03/02/16 17:47, Jean-Marc Vanel wrote:


I forgot to mention that I'm still using Jena 2.13.0, because Banana-RDF
has not been updated.


2016-02-03 18:43 GMT+01:00 Jean-Marc Vanel :

I think that the second pattern "create a dataset object on the thread",

or rather in my case
"create a dataset object for one HTTP request"
is worth trying.

And I want to know why the doc seems to prefer the first pattern.

2016-02-03 18:30 GMT+01:00 A. Soroka :

On Feb 3, 2016, at 5:13 AM, Jean-Marc Vanel 



wrote:



In the documentation,




https://jena.apache.org/documentation/tdb/tdb_transactions.html#multi-threaded-use



it is not clear which use pattern is preferred and the reason why.



The first pattern shows a single dataset object being shared between
threads, each of which operates a transaction against that object, and the
second pattern is introduced with "or create a dataset object on the thread
(the case above is preferred):”.

As to why, I am not familiar enough with TDB to be sure, but there is a
comment on the second pattern "Each thread has a separate dataset object;
these safely share the same storage but have independent transactions.”
that would seem to indicate that the second pattern is vulnerable to having
conflicts between transactions opened against the two different dataset
objects.

---
A. Soroka
The University of Virginia Library
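
For illustration, a minimal Java sketch of the first (preferred) pattern - one shared Dataset object, with each thread running its own transaction against it; a sketch only, assuming TDB1's TDBFactory and the Dataset transaction API (Jena 2.x package names):

    import com.hp.hpl.jena.query.Dataset;
    import com.hp.hpl.jena.query.ReadWrite;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.tdb.TDBFactory;
    import com.hp.hpl.jena.vocabulary.RDFS;

    public class SharedDatasetExample {
        public static void main(String[] args) {
            // One Dataset object, shared by every thread.
            Dataset dataset = TDBFactory.createDataset("target/tdb-example"); // hypothetical location

            Runnable writer = () -> {
                dataset.begin(ReadWrite.WRITE);
                try {
                    Model m = dataset.getDefaultModel();
                    m.add(m.createResource("http://example.org/s"), RDFS.label, "example");
                    dataset.commit();
                } finally {
                    dataset.end();
                }
            };

            Runnable reader = () -> {
                dataset.begin(ReadWrite.READ);
                try {
                    System.out.println("size: " + dataset.getDefaultModel().size());
                } finally {
                    dataset.end();
                }
            };

            new Thread(writer).start();
            new Thread(reader).start();
        }
    }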


On Feb 3, 2016, at 5:13 AM, Jean-Marc Vanel 



wrote:



I have a repeating memory leak in TDB in my web application (



https://github.com/jmvanel/semantic_forms/blob/master/scala/forms_play/README.md


).
It is caching RDF documents from the internet, typically DBpedia
resources.

It is not the use case described in "Fuseki/TDB memory leak for concurrent
updates/queries" https://issues.apache.org/jira/browse/JENA-689 , as
the journal is empty after the crash.

A single Dataset object is used for the duration of the application, and I
suspect this is the root cause.
In the documentation,




https://jena.apache.org/documentation/tdb/tdb_transactions.html#multi-threaded-use



it is not clear which use pattern is preferred and the reason why.

Can someone confirm that keeping a single Dataset object for the duration
of the application is bad?







--
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://deductions-software.com/
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui















Re: How to remove consistently a triple pattern given a SPARQL query?

2016-02-04 Thread Carlo . Allocca
Dear Andy and All,

Thank you very much for all your suggestions and willingness.

I went through all of them again and, putting them together (changing the code
reported in the previous email a bit), I got a reasonable first version that is
working. But I had to use arg0 instead of arg1 as suggested.

Please, could I ask the following:

Is there any way to access the aggregate expressions (e.g. GROUP BY or HAVING)
when applying an implementation of ElementTransform?

The reason for this question is: when applying the remove operation for the
triple (?x2 foaf:mbox2 ?mbox2 .) over Q1, it would make sense to eliminate the
ORDER BY too, as it contains a variable (?x2) from the triple that was
eliminated and there is no other triple containing that variable.

Q1:

String qString8 =
    " SELECT DISTINCT ?x2 ?mbox2 WHERE "
    + "{ "
    + "  ?x foaf:name ?name . "
    + "  ?x2 foaf:mbox2 ?mbox2 . "
    + "} "
    + "ORDER BY ?x2";
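
For what it is worth, ElementTransform only sees the WHERE pattern, but the transformed Query object still exposes ORDER BY (and GROUP BY / HAVING) directly, so one option is a post-processing step on the Query after QueryTransformOps.transform. A rough Java sketch, assuming the Query.getOrderBy()/SortCondition API and the PatternVars helper behave as expected (Jena 2.x package names; in Jena 3.x the same classes live under org.apache.jena):

    import java.util.Collection;
    import java.util.Iterator;

    import com.hp.hpl.jena.query.Query;
    import com.hp.hpl.jena.query.SortCondition;
    import com.hp.hpl.jena.sparql.core.PatternVars;
    import com.hp.hpl.jena.sparql.core.Var;

    public class OrderByCleanup {
        // Drop ORDER BY conditions whose variables no longer occur in the query pattern.
        public static void dropDanglingOrderBy(Query q) {
            if (!q.hasOrderBy()) {
                return;
            }
            Collection<Var> patternVars = PatternVars.vars(q.getQueryPattern());
            Iterator<SortCondition> it = q.getOrderBy().iterator(); // assumes the returned list is modifiable
            while (it.hasNext()) {
                SortCondition sc = it.next();
                if (!patternVars.containsAll(sc.getExpression().getVarsMentioned())) {
                    it.remove();
                }
            }
        }
    }

With this, after removing (?x2 foaf:mbox2 ?mbox2 .), ORDER BY ?x2 would be dropped because ?x2 no longer appears in the pattern; a similar walk over getGroupBy()/getHavingExprs() could handle the aggregate clauses.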


Many Thanks in advance.
Best Regards,
Carlo




On 3 Feb 2016, at 14:55, Carlo.Allocca wrote:

Dear Andy and All,

sorry for this long thread.
I am sure I was not able to put some of your suggestions into practice.

I put the full code here http://collabedit.com/wfhtq ; it is also reported below.

I tested it over the Q1 with the triple (?boss  ex:isBossOf  ?ind).

=== BEFORE Q1:

PREFIX  rdfs: 
PREFIX  ex:   
PREFIX  rdf:  

SELECT DISTINCT  ?ind ?boss ?g
WHERE
 {   { ?ind  rdf:type  ?z }
   UNION
 { ?boss  ex:isBossOf  ?ind
   FILTER ( ?boss = "mathieu" )
 }
 }



= AFTER Q1:

PREFIX  rdfs: 
PREFIX  ex:   
PREFIX  rdf:  

SELECT DISTINCT  ?ind ?boss ?g
WHERE
 {   { ?ind  rdf:type  ?z }
   UNION
 { # Empty BGP

   FILTER ( ?boss = "mathieu" )
 }
 }


But the filter is still there, even though when I trace the execution it removes it.
I am doing the update of the AST and the removal of the filter in

    @Override
    public Element transform(ElementGroup arg0, List<Element> arg1) {
        Iterator<Element> itr = arg1.iterator();
        while (itr.hasNext()) {
            Element elem = itr.next();
            ...
        }

        // With this code I am saying the following: make effective all the
        // modifications that have been made so far, if any.
        if (arg0.getElements() == arg1) {
            return arg0;
        } else {
            ElementGroup el2 = new ElementGroup();
            el2.getElements().addAll(arg1);
            return el2;
        }

}


What am I doing wrong?

Many Thanks for your help.

Best Regards,
Carlo





=== Code

// The main class

public class RemoveTriple {

    public RemoveTriple() {
        super();
    }

    public Query removeTP(Query q, Triple tp) {
        RemoveOpTransform rOpTransform = new RemoveOpTransform(q, tp);
        Query queryWithoutTriplePattern = QueryTransformOps.transform(q, rOpTransform);
        return queryWithoutTriplePattern;
    }
}
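
For illustration, a hypothetical use of the class above - parse a query, build the triple pattern to remove with Var/NodeFactory, and call removeTP (the foaf:mbox2 URI expansion is an assumption, not taken from the thread):

    Query q = QueryFactory.create(qString8); // the Q1 string, with its PREFIXes declared
    Triple toRemove = Triple.create(
            Var.alloc("x2"),
            NodeFactory.createURI("http://xmlns.com/foaf/0.1/mbox2"), // assumed expansion of foaf:mbox2
            Var.alloc("mbox2"));
    Query q2 = new RemoveTriple().removeTP(q, toRemove);
    System.out.println(q2);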

// The current implementation of the RemoveOpTransform class - the class that implements the removal of the triple

public class RemoveOpTransform implements ElementTransform {

    private Query query;
    private Triple triple;

    public RemoveOpTransform(Query q, Triple tp) {
        this.query = q;
        this.triple = tp;
    }

    @Override
    public Element transform(ElementTriplesBlock arg0) {
        System.out.println("[RemoveOpTransform::transform(ElementTriplesBlock arg0)] " + arg0.toString());
        System.out.println("");
        return arg0;
    }

    // This is the code related to the
    @Override
    public Element transform(ElementPathBlock eltPB) {
        if (eltPB.isEmpty()) {
            return eltPB;
        }
        Iterator<TriplePath> l = eltPB.patternElts();
        while (l.hasNext()) {
            TriplePath tp = l.next();
            if (tp.asTriple().matches(this.triple)) {
                l.remove();
                return this.transform(eltPB); // eltPB;
            }
        }
        return eltPB;
    }


    @Override
    public Element transform(ElementGroup arg0, List<Element> arg1) {

        Iterator<Element> itr = arg1.iterator();
        while (itr.hasNext()) {
            Element elem = itr.next();
            // I should go one by one and examine all the possible cases.
            // For example:

            // UNION
            if (elem instanceof ElementUnion) {
                boolean isUnionBothSidesEmpty = isUnionBothSidesEmpty((ElementUnion) elem);
                if (isUnionBothSidesEmpty) {
                    itr.remove();
                }
            }

            // OPTIONAL
            if (elem instanceof ElementOptional) {
                boolean isElementOptionalEmpty = isElementOptionalEmpty((ElementOptional) elem);
                if (isElementOptionalEmpty) {
                    itr.remove();
                    // ElementGroup el2 = new ElementGroup();
                    // el2.getElements().addAll(arg1);
                    // return el2;
 

Re: Unable to drop graph (Fuseki + SDB)

2016-02-04 Thread Andy Seaborne

Akhilesh,

Please can you provide a complete, minimal example including the Fuseki 
configuration together with details of your environment (version numbers 
of Jena/Fuseki/SDB, OS, webapp server, etc.)


There are too many unknowns here at the moment to accurately recreate the
situation.


Your examples below seem to have got damaged in email.  A pastebin might 
be safer.



Andy

PS I don't have an Oracle instance for testing - I have MySQL, H2, HSQL, 
or Apache Derby.


On 04/02/16 08:30, Bangalore Akhilesh wrote:

Hi Rob,

The response remained the same even with DROP GRAPH .

Below is the sequence of requests that were issued:

Step 1:

Request:

POST http://localhost:8080/fuseki/oracle/update
Accept: application/sparql-results+json
Content-Type: application/sparql-update

insert data{
  graph  {



  }
}

Response:

Status 204 No Content





Step 2:

Request:

POST http://localhost:8080/fuseki/oracle/query
Accept: application/sparql-results+json
Content-Type: application/sparql-query

select ?g ?s ?p ?o
{
  graph ?g
  {
    ?s ?p ?o
  }
}

Response:

Status 200 OK

{
  "head": {
    "vars": [ "g", "s", "p", "o" ]
  },
  "results": {
    "bindings": [
      {
        "g": { "type": "uri", "value": "urn:providers:search:google" },
        "s": { "type": "uri", "value": "http://www.google.com" },
        "p": { "type": "uri", "value": "http://www.google.com#tab" },
        "o": { "type": "uri", "value": "http://www.google.com/images" }
      }
    ]
  }
}





Step 3:

Request:

POST http://localhost:8080/fuseki/oracle/update
Accept: application/sparql-results+json
Content-Type: application/sparql-update

drop graph 

Response:

Status 204 No Content





Step 4:

Request:

POST http://localhost:8080/fuseki/oracle/query
Accept: application/sparql-results+json
Content-Type: application/sparql-query

select ?g ?s ?p ?o
{
  graph ?g
  {
    ?s ?p ?o
  }
}

Response:

Status 200 OK

{
  "head": {
    "vars": [ "g", "s", "p", "o" ]
  },
  "results": {
    "bindings": [
      {
        "g": { "type": "uri", "value": "urn:providers:search:google" },
        "s": { "type": "uri", "value": "http://www.google.com" },
        "p": { "type": "uri", "value": "http://www.google.com#tab" },
        "o": { "type": "uri", "value": "http://www.google.com/images" }
      }
    ]
  }
}

As you can see, the graph is still available!

I have also tried the below request but the graph still remained.

Request:

DELETE
http://localhost:8080/fuseki/oracle/graph?graph=urn:providers:search:google

Response:

Status 204 No Content



Thanks,
Akhilesh

On Tue, Feb 2, 2016 at 7:03 PM, Rob Vesse  wrote:


How do you verify that the graph is still present?

Also what happens if you run DROP GRAPH ?

The SILENT keyword allows an operation to fail but ignores the failure and
returns success, so if something is going wrong, removing the SILENT keyword
allows the error to be propagated.

Rob

On 02/02/2016 12:59, "Bangalore Akhilesh" 
wrote:


Hi All,

I had set up Fuseki with SDB to work against Oracle.

Today, I observed that the command *DROP SILENT GRAPH *
returned a success code but the graph & the triples remained in the
database.

Can anyone please help me out to address this problem?

Thanks,
Akhilesh












Re: Memory leak in TDB using a single Dataset object

2016-02-04 Thread Jean-Marc Vanel
Sorry for being vague.
The RAM usage keeps growing, until it crashes with an Out Of Memory exception.

AFAIK transactions occur on the same thread started by the Play! framework
and so do not overlap.
About the "pattern of transactions", I don't know what to answer. If there
were a questionnaire I'd be glad to answer it. Also, I can instrument the code
if there is some procedure.

It is running with Java version "1.8.0_65", on Ubuntu 15.10.

The test I'm going to do is to call close() and refresh the Dataset when
reaching 80% of the maximum memory.


2016-02-03 23:05 GMT+01:00 Andy Seaborne :

> Hi there -
>
> "memory leak" has possible several meaning, not sure which you you mean:
>
> * RAM usage is growing?
> * Disk usage is growing?
> * a specific file (the journal is growing)?
>
> What is the pattern of transactions? (how many, do they overlap?)
>
> Andy
>
>
> On 03/02/16 17:47, Jean-Marc Vanel wrote:
>
>> I forgot to mention that I'm still using Jena 2.13.0, because Banana-RDF
>> has not been updated.
>>
>>
>> 2016-02-03 18:43 GMT+01:00 Jean-Marc Vanel :
>>
>> I think that the second pattern "create a dataset object on the thread",
>>> or rather in my case
>>> "create a dataset object for one HTTP request"
>>> is worth trying.
>>>
>>> And I want to know why the doc seems to prefer the first pattern.
>>>
>>> 2016-02-03 18:30 GMT+01:00 A. Soroka :
>>>
>>> On Feb 3, 2016, at 5:13 AM, Jean-Marc Vanel 
>
 wrote:

>
> In the documentation,
>
>
>
 https://jena.apache.org/documentation/tdb/tdb_transactions.html#multi-threaded-use

>
> it is not clear which use pattern is preferred and the reason why.
>

 The first pattern shows a single dataset object being shared between
 threads, each of which operates a transaction against that object, and
 the
 second pattern is introduced with "or create a dataset object on the
 thread
 (the case above is preferred):”.

 As to why, I am not familiar enough with TDB to be sure, but there is a
 comment on the second pattern "Each thread has a separate dataset
 object;
 these safely share the same storage but have independent transactions.”
 that would seem to indicate that the second pattern is vulnerable to
 having
 conflicts between transactions opened against the two different dataset
 objects.

 ---
 A. Soroka
 The University of Virginia Library


 On Feb 3, 2016, at 5:13 AM, Jean-Marc Vanel 
>
 wrote:

>
> I have a repeating memory leak in TDB in my web application (
>
>
 https://github.com/jmvanel/semantic_forms/blob/master/scala/forms_play/README.md

> ).
> It is caching RDF documents from the internet, typically DBpedia
> resources.
>
> It is not the use case described in "Fuseki/TDB memory leak for
>
 concurrent

> updates/queries" https://issues.apache.org/jira/browse/JENA-689 , as
>
 the

> journal is empty after crash .
>
> A single Dataset object is used for the duration of the application,
>
 and I

> suspect this is the root cause.
> In the documentation,
>
>
>
 https://jena.apache.org/documentation/tdb/tdb_transactions.html#multi-threaded-use

>
> it is not clear which use pattern is preferred and the reason why.
>
> Can someone confirm that keeping a single Dataset object for the
>
 duration

> of the application is bad ?
>



>>>
>>> --
>>> Jean-Marc Vanel
>>> Déductions SARL - Consulting, services, training,
>>> Rule-based programming, Semantic Web
>>> http://deductions-software.com/
>>> +33 (0)6 89 16 29 52
>>> Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
>>>
>>>
>>
>>
>>
>


-- 
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://deductions-software.com/
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui


Re: Unable to drop graph (Fuseki + SDB)

2016-02-04 Thread Bangalore Akhilesh
Hi Rob,

The response remained the same even with DROP GRAPH .

Below is the sequence of requests that were issued:

Step 1:

Request:

POST http://localhost:8080/fuseki/oracle/update
Accept: application/sparql-results+json
Content-Type: application/sparql-update

insert data{
  graph  {



  }
}

Response:

Status 204 No Content





Step 2:

Request:

POST http://localhost:8080/fuseki/oracle/query
Accept: application/sparql-results+json
Content-Type: application/sparql-query

select ?g ?s ?p ?o
{
  graph ?g
  {
    ?s ?p ?o
  }
}

Response:

Status 200 OK

{
  "head": {
    "vars": [ "g", "s", "p", "o" ]
  },
  "results": {
    "bindings": [
      {
        "g": { "type": "uri", "value": "urn:providers:search:google" },
        "s": { "type": "uri", "value": "http://www.google.com" },
        "p": { "type": "uri", "value": "http://www.google.com#tab" },
        "o": { "type": "uri", "value": "http://www.google.com/images" }
      }
    ]
  }
}





Step 3:

Request:

POST http://localhost:8080/fuseki/oracle/update
Accept: application/sparql-results+json
Content-Type: application/sparql-update

drop graph 

Response:

Status 204 No Content





Step 4:

Request:

POST http://localhost:8080/fuseki/oracle/query
Accept: application/sparql-results+json
Content-Type: application/sparql-query

select ?g ?s ?p ?o
{
  graph ?g
  {
    ?s ?p ?o
  }
}

Response:

Status 200 OK

{
  "head": {
    "vars": [ "g", "s", "p", "o" ]
  },
  "results": {
    "bindings": [
      {
        "g": { "type": "uri", "value": "urn:providers:search:google" },
        "s": { "type": "uri", "value": "http://www.google.com" },
        "p": { "type": "uri", "value": "http://www.google.com#tab" },
        "o": { "type": "uri", "value": "http://www.google.com/images" }
      }
    ]
  }
}

As you can see, the graph is still available!

I have also tried the below request but the graph still remained.

Request:

DELETE
http://localhost:8080/fuseki/oracle/graph?graph=urn:providers:search:google

Response:

Status 204 No Content



Thanks,
Akhilesh

On Tue, Feb 2, 2016 at 7:03 PM, Rob Vesse  wrote:

> How do you verify that the graph is still present?
>
> Also what happens if you run DROP GRAPH ?
>
> The SILENT keyword allows an operation to fail but ignores the failure and
> returns success, so if something is going wrong, removing the SILENT keyword
> allows the error to be propagated.
>
> Rob
>
> On 02/02/2016 12:59, "Bangalore Akhilesh" 
> wrote:
>
> >Hi All,
> >
> >I had set up Fuseki with SDB to work against Oracle.
> >
> >Today, I observed that the command *DROP SILENT GRAPH *
> >returned a success code but the graph & the triples remained in the
> >database.
> >
> >Can anyone please help me out to address this problem?
> >
> >Thanks,
> >Akhilesh
>
>
>
>
>