On 11/08/16 13:55, Andy Seaborne wrote:
On 11/08/16 09:36, Mikael Pesonen wrote:

Hi Andy,

I did some long-lasting queries against Fuseki at the same time, but I don't
remember if that was exactly at the time of the error.

Are there some rules about what is allowed and what isn't with simultaneous
reads and writes?

"Rules" is too strong, but the system has to have moments when it can
consolidate commits that are only in the journal and not yet in the main
database.

Until that happens, the wrapper layers are kept around, which is what leads to
the massive stack in extreme cases (it hasn't been reported much before).

On the design side, the update rate exceeds capacity, and ideally there would
be some control over this, i.e. make the writer force the system into a quiet
state and flush the journal if the journal exceeds some threshold like 100
(your stack trace is 336 layers deep).

At that point the system is going to judder - all incoming requests held up,
including readers: let the current readers finish, then flush the journal,
then go back to the normal multiple-reader and single-writer mode.

https://issues.apache.org/jira/browse/JENA-1224

Done - there is now a limit of 250 on TransactionManager.MaxQueueThreshold.

(250 is a bit large and an arbitrary guess - it is still less than the 336
layers your stack trace shows.)
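
For reference, a rough sketch of how that threshold could be tuned in embedded
TDB code. This is only a sketch: it assumes MaxQueueThreshold is exposed as a
plain public static int on org.apache.jena.tdb.transaction.TransactionManager
(check the released code), and the value 100 and the "DB" directory are just
illustrative.

import org.apache.jena.query.Dataset;
import org.apache.jena.query.ReadWrite;
import org.apache.jena.tdb.TDBFactory;
import org.apache.jena.tdb.transaction.TransactionManager;

public class TuneQueueThreshold {
    public static void main(String[] args) {
        // Assumption: a settable public static int; lower it so the journal
        // is flushed back to the main database more often under write load.
        TransactionManager.MaxQueueThreshold = 100;

        Dataset ds = TDBFactory.createDataset("DB");
        ds.begin(ReadWrite.WRITE);
        try {
            // ... apply updates ...
            ds.commit();
        } finally {
            ds.end();
        }
    }
}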

        Andy


For your application, for the released version:

I'm inserting in a loop at most 100 triples at a time with bin/s-update

Could you instead accumulate them into larger units?  This effect depends on
the number of transactions, not their size.  Otherwise, control the
long-running queries.

Either send one large update, or concatenate the SPARQL operations into a
single large request? Operations are separated by ";":

INSERT DATA { .... }
;
# You can set PREFIXes here.
INSERT DATA { .... }
;

...
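
The same batching can be done from Java with the standard ARQ update API; a
minimal sketch (the endpoint is your /ds/update service, and the example
triples and the batch size of 1000 are made up for illustration):

import org.apache.jena.update.UpdateExecutionFactory;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateRequest;

public class BatchedUpdates {
    public static void main(String[] args) {
        String endpoint = "http://localhost:3030/ds/update";

        // Accumulate many INSERT DATA operations into one UpdateRequest so
        // the server sees one request instead of thousands of tiny ones.
        UpdateRequest batch = UpdateFactory.create();
        for (int i = 0; i < 1000; i++) {
            batch.add("INSERT DATA { <http://example/s" + i + "> <http://example/p> " + i + " }");
        }

        // One HTTP request, one write transaction on the Fuseki side.
        UpdateExecutionFactory.createRemote(batch, endpoint).execute();
    }
}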

    Andy

For people looking at the future possibilities:

TDB2 does not have this issue. Writers write their changes and that's
it.  TDB2 is more like a journal-only system - it uses 2 append-only
files per B+Tree, and a small state file (24 bytes: tree root, 2 limits
on the blocks used).
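
To make the append-only idea concrete, a toy sketch (not TDB2 code - the file
names and layout are invented for illustration): data blocks are only ever
appended, and a commit is a small fixed-size state record naming the new root
and the limits of the two files, written and synced last.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class AppendOnlySketch {
    public static void main(String[] args) throws IOException {
        Path nodes = Paths.get("nodes.dat");   // append-only file 1
        Path tree  = Paths.get("tree.dat");    // append-only file 2
        Path state = Paths.get("state.dat");   // small state file

        long nodeLimit = append(nodes, "node table entries".getBytes());
        long treeLimit = append(tree, "new B+Tree blocks".getBytes());
        long newRoot   = treeLimit;            // pretend the last block written is the root

        // The "commit": 24 bytes of state (root + two limits), written last.
        ByteBuffer buf = ByteBuffer.allocate(24);
        buf.putLong(newRoot).putLong(nodeLimit).putLong(treeLimit);
        buf.flip();
        try (FileChannel ch = FileChannel.open(state,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(buf, 0);
            ch.force(true);
        }
    }

    // Append bytes to a file and return its new length (the "limit").
    static long append(Path file, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            ch.write(ByteBuffer.wrap(data));
            return ch.size();
        }
    }
}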


-Mikael


On 10.8.2016 20:02, Andy Seaborne wrote:
Hi Mikael,

It looks like the write transactions aren't able to trigger writing the
journal back to the database. That needs a point in time when there are no
other transactions happening.

Is there a long-running read transaction, or many small ones, going on at the
same time?  Or is one write transaction happening at the same time as another?

    Andy

On 10/08/16 14:57, Mikael Pesonen wrote:

Hi,

I'm inserting data into a Jena store and got this exception. The server is:

 /usr/bin/java -Xmx3600M -jar
/home/text/tools/apache-jena-fuseki-2.3.1/fuseki-server.jar --update
--port 3030

I'm inserting in a loop at most 100 triples at a time with bin/s-update. The
error occurred after a few thousand insertions.


[2016-08-10 16:29:41] Fuseki     INFO  [78957] POST http://semantic-dev.lingsoft.fi:3030/ds/update
[2016-08-10 16:29:41] Fuseki     INFO  [78957] POST /ds :: 'update' :: [application/sparql-update] ?
java.lang.StackOverflowError
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
    at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:59)
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
    at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:59)
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
[2016-08-10 16:29:41] Fuseki     INFO  [78957] 500 Server Error (33 ms)

Br,
Mikael




