Can you give us your actual Fuseki config (i.e. assembler file)? Or are you
repeatedly creating new datasets via the admin API?
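For comparison, a minimal in-memory dataset in an assembler file looks roughly like this (the service name "ds" and fragment identifiers here are illustrative, not taken from your setup):

```turtle
@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ja:     <http://jena.hpl.hp.com/2005/11/Assembler#> .

<#service> rdf:type fuseki:Service ;
    fuseki:name          "ds" ;       # dataset name (illustrative)
    fuseki:serviceQuery  "query" ;
    fuseki:serviceUpdate "update" ;
    fuseki:dataset       <#dataset> .

<#dataset> rdf:type ja:RDFDataset ;
    ja:defaultGraph <#graph> .

<#graph> rdf:type ja:MemoryModel .    # purely in-memory default graph
```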
---
A. Soroka
The University of Virginia Library
> On Jan 6, 2017, at 10:43 AM, Janda, Radim wrote:
>
> Hello,
> we use in-memory datasets.
Hello,
we use in-memory datasets.
The JVM is big enough, but as we process thousands of small data sets, memory
is allocated continuously.
Actually, we restart Fuseki every hour to avoid out-of-memory errors.
However, performance also degrades over time (before the restart), which is
why we are looking
Are you using persistent or in-memory datasets for your working storage?
If you really mean memory (RAM), are you sure the JVM is big enough?
Fuseki tries to avoid holding on to cached transactions, but if the server
is under heavy read load (Rob's point) then they can build up
(solution -
Deleting data does not reclaim all the memory; exactly what is and isn't
reclaimed depends somewhat on your exact usage pattern.
The B+Trees, which are the primary data structure for TDB, the default database
used in Fuseki, do not reclaim the space. They are also potentially subject to
fragmentation
Hello Lorenz,
yes, I meant deleting data from Fuseki using a DELETE command.
We have version 2.4 installed.
We use two types of queries:
1. Insert new triples derived from the existing RDF model (SPARQL INSERT)
2. Find results in the data (SPARQL SELECT)
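For illustration, the two query shapes are roughly as follows (the IRIs are made up, not our real vocabulary):

```sparql
# 1. Derive new triples from existing ones
INSERT { ?doc <http://example.org/hasLabel> ?label }
WHERE  { ?doc <http://purl.org/dc/terms/title> ?label }

# 2. Read results back out
SELECT ?doc ?label
WHERE  { ?doc <http://example.org/hasLabel> ?label }
```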
Thanks
Radim
On Fri, Jan 6, 2017 at 1:04
Hello Radim,
just to avoid confusion, with "Delete whole Fuseki" you mean the data
loaded into Fuseki, right?
Which Fuseki version do you use?
What kind of transformation do you do? I'm asking because I'm wondering
if it's necessary to use Fuseki.
Cheers,
Lorenz
> Hello,
> We use Jena
Hello,
We use Jena Fuseki to process a lot of small data sets.
It works in the following way:
1. Delete whole Fuseki (using DELETE command)
2. Load data to Fuseki (using INSERT)
3. Transform the data and create output (SPARQL called from Python)
4. Repeat 1)-3): delete Fuseki and transform another data set
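A minimal sketch of that loop in Python, assuming Fuseki's default SPARQL protocol endpoints (http://localhost:3030/<dataset>/update and .../query); the dataset name "ds" and the example triples are made up, not our real data:

```python
# Sketch of the delete / load / transform cycle described above.
# Builds the HTTP requests without sending them, so the shapes can be
# inspected; urllib.request.urlopen(req) would actually execute each step.
import urllib.parse
import urllib.request

FUSEKI = "http://localhost:3030/ds"  # hypothetical dataset endpoint

def sparql_update(update: str) -> urllib.request.Request:
    """POST request for the update endpoint (SPARQL 1.1 protocol)."""
    data = urllib.parse.urlencode({"update": update}).encode()
    return urllib.request.Request(FUSEKI + "/update", data=data, method="POST")

def sparql_query(query: str) -> urllib.request.Request:
    """POST request for the query endpoint, asking for JSON results."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(FUSEKI + "/query", data=data, method="POST")
    req.add_header("Accept", "application/sparql-results+json")
    return req

# 1. Delete everything currently loaded
delete_all = sparql_update("DELETE WHERE { ?s ?p ?o }")
# 2. Load one small data set
insert = sparql_update(
    "INSERT DATA { <http://example.org/s> <http://example.org/p> 'o' }")
# 3. Transform / read results back out
select = sparql_query("SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10")
```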