Comments interleaved.

2016-02-04 10:26 GMT+01:00 Andy Seaborne <[email protected]>:

> On 04/02/16 08:15, Jean-Marc Vanel wrote:
>
>> Sorry for being vague.
>> The RAM usage keeps growing, until crashing with an Out Of Memory exception.
>>
>
> TDB uses a bounded amount of caching, though the journal workspace can
> grow.
>

So, if TDB uses a bounded amount of caching *in memory*, there is nothing
against using the same Dataset object as a singleton.
The journal workspace you mention is on disk, isn't it? My problem is not
on disk at all.
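
For the record, the singleton in question is essentially this (a sketch
with Jena 3.x package names; the directory path is a placeholder):

    import org.apache.jena.query.Dataset
    import org.apache.jena.tdb.TDBFactory

    // The single, shared, disk-backed Dataset for the whole application.
    // "/path/to/tdb" stands in for the real TDB directory.
    object TDBStore {
      val dataset: Dataset = TDBFactory.createDataset("/path/to/tdb")
    }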


> If there are lots of large literals, you'll need more heap.
>

The largest literals involved in the transactions are DBpedia abstracts; I
would not call those "large" literals.
In any case the problem happens sooner or later; raising the heap size does
not help.


> The transaction system in TDB1 keeps up to 10 transactions buffered: you
> can switch that off with:
>
>    TransactionManager.QueueBatchSize = 0 ;
>
> then commits are flushed back to the main database as soon as possible.
> That requires that no readers are active.
>

I'll try that too, and report back.
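
Concretely, I read that as something like this (a sketch, assuming Jena
3.x package names; on Jena 2.x the same classes live under
com.hp.hpl.jena.tdb; the directory path is a placeholder):

    import org.apache.jena.query.{Dataset, ReadWrite}
    import org.apache.jena.tdb.TDBFactory
    import org.apache.jena.tdb.transaction.TransactionManager

    object EagerFlush {
      def main(args: Array[String]): Unit = {
        // Disable the commit queue before the first transaction runs,
        // so each commit is flushed back to the main database as soon
        // as no readers hold it up.
        TransactionManager.QueueBatchSize = 0

        val dataset: Dataset = TDBFactory.createDataset("/path/to/tdb")
        dataset.begin(ReadWrite.WRITE)
        try {
          // ... updates on dataset ...
          dataset.commit()
        } finally {
          dataset.end()
        }
      }
    }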


> If you have a reader that doesn't commit/end properly, the system can
> never write to the main database.
>

That should not be possible: I use a Scala construct that wraps a fragment
of code inside a transaction and automatically calls commit or end.
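
For reference, that construct is roughly the following (a sketch;
wrapInWriteTransaction is just the name I use here):

    import org.apache.jena.query.{Dataset, ReadWrite}

    object Transactions {
      // Run `block` inside a write transaction: commit on success,
      // and always call end() so no transaction is left open.
      def wrapInWriteTransaction[A](dataset: Dataset)(block: => A): A = {
        dataset.begin(ReadWrite.WRITE)
        try {
          val result = block
          dataset.commit()
          result
        } finally {
          // end() releases the transaction in every case, and aborts
          // it if an exception prevented commit() from being reached.
          dataset.end()
        }
      }
    }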

However, there are some reads on the database that happen outside a
transaction. This covers navigation by find() on the <urn:x-arq:UnionGraph>
graph. During development, every time a runtime exception said "outside a
transaction", I fixed it. The other cases were left outside a transaction;
is that wrong?
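
To be concrete, such a read looks roughly like this (a sketch; browse
and subjectUri are just illustrative names):

    import org.apache.jena.graph.{Node, NodeFactory}
    import org.apache.jena.query.Dataset

    object UnionGraphBrowse {
      // A non-transactional read: find() on the union graph with no
      // begin()/end() around it.
      def browse(dataset: Dataset, subjectUri: String): Unit = {
        val union = dataset.asDatasetGraph().getGraph(
          NodeFactory.createURI("urn:x-arq:UnionGraph"))
        val it = union.find(
          NodeFactory.createURI(subjectUri), Node.ANY, Node.ANY)
        while (it.hasNext) println(it.next())
      }
    }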

...

> It is a disk-backed dataset, not a TDB in-memory one?
>

Yes, disk-based.

-- 
Jean-Marc Vanel
Déductions SARL - Consulting, services, training,
Rule-based programming, Semantic Web
http://deductions-software.com/
+33 (0)6 89 16 29 52
Twitter: @jmvanel , @jmvanel_fr ; chat: irc://irc.freenode.net#eulergui
