More information:
About Java and containers and sizing:
Summary: things got better in Java 10, so running with Java 11 is a good idea.
https://www.docker.com/blog/improved-docker-container-integration-with-java-10/
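To illustrate the point in that post: since Java 10 the JVM reads the container's cgroup limits rather than the host's RAM. A sketch of how that can be tuned, assuming an OpenJDK 11 base image; the 50% figure and the base image are examples, not from this thread:

```dockerfile
# Sketch only: with Java 10+ the JVM is container-aware by default
# (-XX:+UseContainerSupport) and sizes its heap from the cgroup
# memory limit. MaxRAMPercentage tunes what fraction of that limit
# the heap may use; 50% here is purely illustrative.
FROM openjdk:11-jre-slim
ENV JAVA_TOOL_OPTIONS="-XX:MaxRAMPercentage=50.0"
```

JAVA_TOOL_OPTIONS is picked up by any HotSpot JVM at startup, so it works even when the entrypoint script does not expose its own options variable.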
Andy
On 17/04/2020 10:58, Rob Vesse wrote:
Okay, that's very helpful.
So one thing that jumps out at me looking at that Dockerfile and its associated
entrypoint script is that it starts the JVM without any explicit heap size
settings. When that is done, the JVM picks default heap sizes itself, which
would normally be fine. However
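As a sketch of what explicit settings could look like in such an entrypoint script (the JVM_ARGS variable is honoured by the stock Fuseki start scripts, but the paths and the specific values below are assumptions for illustration):

```shell
# Entrypoint fragment (sketch): pin the heap explicitly instead of
# relying on the JVM's container defaults. Values are examples only;
# leave headroom for TDB's memory-mapped files, which live outside
# the Java heap.
export JVM_ARGS="-Xms256m -Xmx1200m"
exec /jena-fuseki/fuseki-server --config=/fuseki/config.ttl
```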
Hi all, some answers below to the many questions.
1. This Fuseki instance is based on the image maintained at DockerHub by the
secoresearch account. Copies of the Dockerfile and tdb.cfg files are at the end
of this message. There is no other code involved.
2. The image is deployed to an
The TE said
> In the attachment you can find a chart plotting memory use increase against
> dataset size. There is no visible correlation, but on average each additional
> triple requires upwards of 30 MB of RAM.
but those numbers can't be correct ...
The y axis denotes the memory consumption in
What do we know so far?
1/ 6 datasets, up to 20 MB each (file size? format? Compressed? Inference?)
(is that datasets or graphs?)
2/ At 1G the system kills processes.
What we don't know:
A/ Heap size
B/ Machine RAM size - TDB uses memory-mapped files, so this matters. It also
means the process
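Both unknowns can be checked from inside the running container; a diagnostic sketch (the cgroup path assumes cgroup v1 and differs under cgroup v2):

```shell
# Diagnostic sketch: what max heap did the JVM actually pick, and
# what memory limit is the container running under?
java -XX:+PrintFlagsFinal -version | grep -w MaxHeapSize
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # cgroup v1 path
```

Note that because TDB maps its database files into the process address space, the resident set size reported by the OS will exceed the Java heap; the two numbers must be read separately.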
I find the implied figures hard to believe. As Lorenz has said, you will need
to share your findings via some other service, since this mailing list does
not permit attachments.
Many people use Fuseki and TDB to host datasets in the hundreds of millions
(if not billions) of triples in production.
Hi Lorenz,
someone managed to post a picture in a previous message, so I wonder whether
this issue affects everybody in the same way. In any case, here is a link to
Pasteboard:
https://pasteboard.co/J43bRYp.png
Regards.
--
Luís
‐‐‐ Original Message ‐‐‐
On Thursday, April 16, 2020 9:40 AM, Lorenz Buehmann
No attachments are possible on this mailing list. Please use an external
service to share them, or embed it as an image (in case it's just an image)
as you did in your other thread. Or just use a Gist.
On 16.04.20 09:27, Luís Moreira de Sousa wrote:
> Dear all,
>
> I have been tweaking the
Dear all,
I have been tweaking the tdb.node2nodeid_cache_size and
tdb.nodeid2node_cache_size parameters as Andy suggested. They do reduce the
RAM used by Fuseki, but not to a point where it becomes usable. Attached is a
chart plotting memory use increase against dataset size.
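For reference, a tdb.cfg fragment with reduced caches might look like the following; the values are illustrative only, not a recommendation from this thread:

```properties
# tdb.cfg sketch: shrink the node table caches to trade lookup speed
# for a smaller resident set. The figures below are examples, not
# tuned recommendations.
tdb.node2nodeid_cache_size=50000
tdb.nodeid2node_cache_size=50000
```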
On 12/03/2020 11:26, Luís Moreira de Sousa wrote:
Dear all,
I loaded six RDF datasets into Fuseki, with sizes between 20 KB and 20 MB. To
host these six datasets (in persistent mode) Fuseki is using over 1 GB of RAM
and could soon get killed by the system (running in a container