On 11/08/16 14:01, Mikael Pesonen wrote:

OK, concatenating looks like the easiest solution for me. Is the update
rate limit related to speed only? Would just adding some delay between
insertions solve the issue too?

Maybe not.  The issue is finding the point where changes can be written back.

That happens when a transaction finishes (read or write) and no other transaction is in progress. Then all commits in the journal are written back in a single operation.

It seems to be the long-running reads that cause the hold-up.

If you have

R1 starts.
...
   R2 starts
...
R1 finishes
...
   R2 finishes

that has the same effect on the transaction system as a single query running from R1's start to R2's finish.

Larger writes reduce the number of layers (= the number of committed transactions not yet flushed; they are not lost on a restart or crash).

        Andy


-Mikael


On 11.8.2016 15:55, Andy Seaborne wrote:
On 11/08/16 09:36, Mikael Pesonen wrote:

Hi Andy,

I ran some long-lasting queries on Fuseki at the same time, but I don't
remember if that was exactly at the time of the error.

Are there rules about what is and isn't allowed with simultaneous reads
and writes?

"Rules" is too strong, but the system has to have moments when it can
consolidate commits that are only in the journal and not yet in the main
database.

Part of that is keeping the wrapper layer around, which is what leads
to the massive stack in extreme cases (it hasn't been reported much
before).

For the design: the update rate exceeds capacity, and ideally there
would be some control over this, i.e. make the writer force the
system into a quiet state and flush the journal if the journal exceeds
some threshold like 100 (your stack trace shows 336 layers).

At that point the system is going to judder: all incoming requests are
held up, including readers.  Let the current readers finish, then flush
the journal, then go back to the normal multiple-reader, single-writer
mode.

https://issues.apache.org/jira/browse/JENA-1224

For your application, for the released version:

>>> Im inserting in a loop max 100 triplets at a time with bin/s-update

Could you instead accumulate them into larger units?  This effect depends
on the number of transactions, not their size.  Otherwise, control the
long-running queries.

Either send one large update, or concatenate the SPARQL operations into
one large request. Operations are separated by ";":

INSERT DATA { .... }
;
# You can set PREFIXes here.
INSERT DATA { .... }
;

...
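A minimal sketch of the batching approach: build one update request containing several INSERT DATA operations separated by ";", so the server runs one write transaction instead of one per operation. The file name, the triples, and the endpoint URL are made-up examples; check bin/s-update --help for the exact invocation.

```shell
#!/bin/sh
# Batch several small INSERT DATA operations into one SPARQL update
# request, separated by ";".  The data is illustrative only.
REQ=batch.ru
: > "$REQ"
for i in 1 2 3; do
  [ "$i" -gt 1 ] && printf ';\n' >> "$REQ"
  printf 'INSERT DATA { <http://example/s%s> <http://example/p> "o%s" }\n' "$i" "$i" >> "$REQ"
done
cat "$REQ"
# Then send the whole batch as one request, e.g. (endpoint is an example):
#   bin/s-update --service=http://localhost:3030/ds/update "$(cat "$REQ")"
```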

    Andy

For people looking at the future possibilities:

TDB2 does not have this issue. Writers write their changes and that's
it.  TDB2 is more like a journal-only system: it uses two append-only
files per B+Tree, plus a small state file (24 bytes: the tree root and
two limits on the blocks used).


-Mikael


On 10.8.2016 20:02, Andy Seaborne wrote:
Hi Mikael,

It looks like the write transactions aren't able to trigger writing the
journal back to the database. That needs a point in time when there are
no other transactions happening.

Is there a long-running read transaction, or many small ones, going on
at the same time?  Or is one write transaction happening at the same
time as another?

    Andy

On 10/08/16 14:57, Mikael Pesonen wrote:

Hi,

I'm inserting data into a Jena store and got this exception. The server is:

 /usr/bin/java -Xmx3600M -jar /home/text/tools/apache-jena-fuseki-2.3.1/fuseki-server.jar --update --port 3030

I'm inserting in a loop, at most 100 triples at a time, with bin/s-update.
The error occurred after a few thousand insertions.


[2016-08-10 16:29:41] Fuseki     INFO  [78957] POST http://semantic-dev.lingsoft.fi:3030/ds/update
[2016-08-10 16:29:41] Fuseki     INFO  [78957] POST /ds :: 'update' :: [application/sparql-update] ?
java.lang.StackOverflowError
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
    at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:59)
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
    at org.apache.jena.tdb.store.nodetable.NodeTableInline.getNodeIdForNode(NodeTableInline.java:59)
    at org.apache.jena.tdb.transaction.NodeTableTrans.getNodeIdForNode(NodeTableTrans.java:98)
    at org.apache.jena.tdb.store.nodetable.NodeTableWrapper.getNodeIdForNode(NodeTableWrapper.java:48)
    ...

[2016-08-10 16:29:41] Fuseki     INFO  [78957] 500 Server Error (33 ms)

Br,
Mikael





