On Fri, Sep 30, 2022 at 5:06 PM Andy Seaborne wrote:
>
> On 30/09/2022 12:39, Martynas Jusevičius wrote:
> > On Fri, Sep 30, 2022 at 11:40 AM Andy Seaborne wrote:
> >>
> >> Depends on what "runs out of memory" means.
> >
> > MEM% in docker stats reaches 100% and the container quits. Inspecting
> > it then shows OOMKilled: true.
>
On 30/09/2022 12:39, Martynas Jusevičius wrote:
> On Fri, Sep 30, 2022 at 11:40 AM Andy Seaborne wrote:
> >
> > Depends on what "runs out of memory" means.
>
> MEM% in docker stats reaches 100% and the container quits. Inspecting
> it then shows OOMKilled: true.

So you have a 10G container that crashes
On Fri, Sep 30, 2022 at 11:40 AM Andy Seaborne wrote:
>
> Depends on what "runs out of memory" means.

MEM% in docker stats reaches 100% and the container quits. Inspecting
it then shows OOMKilled: true.

> If the container is being killed by the host, then it is likely some
> process in the container is asking for memory (sbrk), and there is a
> container limit (or ulimit?) being exceeded, so the container runtime or
> the host kills the container.
Depends on what "runs out of memory" means.

If the container is being killed by the host, then it is likely some
process in the container is asking for memory (sbrk), and there is a
container limit (or ulimit?) being exceeded, so the container runtime or
the host kills the container.
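For reference, the host-kill scenario can be checked from outside the JVM. A minimal sketch, assuming a container named "fuseki"; the image name is a placeholder, not something from this thread:

```shell
# Run with an explicit memory limit so the cap is enforced per-container
# (placeholder image name; substitute your actual Fuseki image).
docker run -d --name fuseki --memory=10g my-fuseki-image

# After a crash, confirm the container runtime/kernel OOM-killed it:
docker inspect --format '{{.State.OOMKilled}}' fuseki

# Watch the container's resident memory (heap + off-heap + mapped files):
docker stats --no-stream fuseki
```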
From my understanding, a larger heap space for Fuseki should only be
necessary when doing reasoning or e.g. loading the geospatial index. A
TDB database, on the other hand, is backed by memory-mapped files, i.e.
it makes use of off-heap memory and lets the OS do all the work.
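A sketch of that heap/off-heap split, assuming a 10G container limit and a JAVA_OPTIONS environment variable as honoured by some Fuseki images (both are assumptions, not confirmed in this thread):

```shell
# Keep the heap well below the container limit so TDB2's memory-mapped
# files have off-heap room: here a 4g heap inside a 10g container.
docker run -d --memory=10g \
  -e JAVA_OPTIONS="-Xms4g -Xmx4g" \
  my-fuseki-image   # placeholder image name
```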
Still hasn't crashed, so less heap could be the solution in this case.

On Thu, Sep 29, 2022 at 3:12 PM Martynas Jusevičius wrote:
>
> I've lowered the heap size to 4GB to leave more off-heap memory (6GB).
> It's been an hour and OOMKilled hasn't happened yet, unlike before.
> MEM% in docker stats peaks around 70%.
I've lowered the heap size to 4GB to leave more off-heap memory (6GB).
It's been an hour and OOMKilled hasn't happened yet, unlike before.
MEM% in docker stats peaks around 70%.

On Thu, Sep 29, 2022 at 12:41 PM Martynas Jusevičius wrote:
>
> OK, the findings are weird so far...
>
> Under constant query load on my local Docker, MEM% of the Fuseki
> container reached 100% within 45 minutes and it got OOMKilled.
OK, the findings are weird so far...

Under constant query load on my local Docker, MEM% of the Fuseki
container reached 100% within 45 minutes and it got OOMKilled.
However, the Used heap "teeth" in VisualVM were below 3GB of the total
~8GB Heap size the whole time.

What does that tell us?
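One way to answer that is to compare the JVM's own accounting with the container's resident set. A sketch, assuming a HotSpot JVM started with -XX:NativeMemoryTracking=summary and that you know the Fuseki process PID (both assumptions; <pid> is a placeholder):

```shell
# JVM-side: what the JVM itself has allocated (heap, GC, threads, ...).
jcmd <pid> VM.native_memory summary
jcmd <pid> GC.heap_info

# Container-side: resident memory, which also counts direct buffers and
# mmap'd TDB2 files that never show up in the heap graph.
docker stats --no-stream fuseki   # assumed container name
```

If the container's RSS keeps growing while the heap sawtooth stays flat, the growth is off-heap (mapped files, direct buffers, native allocations), not a Java heap leak.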
Hi Eugen,
I have the debugger working, I was trying to connect the profiler :)
Finally I managed to connect from VisualVM on Windows thanks to this
answer:
https://stackoverflow.com/questions/66222727/how-to-connect-to-jmx-server-running-inside-wsl2/71881475#71881475
I've launched an infinite
For debugging, you need to do the following:

* pass JVM options to enable debugging
* expose the docker port for the JVM debug port you chose

https://stackoverflow.com/questions/138511/what-are-java-command-line-options-to-set-to-allow-jvm-to-be-remotely-debugged

You should be able to do all this without
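A sketch of those two steps combined, with JMX added so a profiler like VisualVM can attach as well. The ports (5005/9010), the JAVA_OPTIONS variable name, and the image name are assumptions, not from this thread; disabling JMX auth/SSL is only reasonable for local debugging:

```shell
# Remote debug (JDWP) plus unauthenticated local JMX for profiling.
JAVA_OPTIONS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 \
  -Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=9010 \
  -Dcom.sun.management.jmxremote.rmi.port=9010 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false \
  -Djava.rmi.server.hostname=localhost"

# Map the Fuseki port plus both the debug and JMX ports.
docker run -d -p 3030:3030 -p 5005:5005 -p 9010:9010 \
  -e JAVA_OPTIONS="$JAVA_OPTIONS" my-fuseki-image   # placeholder image
```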
On Thu, Sep 29, 2022 at 9:41 AM Lorenz Buehmann wrote:
>
> You're working on an in-memory dataset?

No, the datasets are TDB2-backed.

> Does it also happen with Jena 4.6.1?

Don't know :)

I wanted to run a profiler and tried connecting from VisualVM on
Windows to the Fuseki container but neither
We experienced Fuseki crashes as well.

For us, we had LARGE data sets and some reasoning rules.
Lots of updates increase disk size as well.

I hope it helps,
Eugen
On 29.09.2022 10:40, Lorenz Buehmann wrote:

You're working on an in-memory dataset? Does it also happen with Jena 4.6.1?
On 28.09.22 20:23, Martynas Jusevičius wrote:

Hi,

We have a dockerized Fuseki 4.5.0 instance that is gradually running
out of memory over the course of a few hours.

3 datasets, none larger than 10 triples. The load is negligible
(maybe a few bursts x 10 simple queries per minute), no updates.

Dockerfile: