At this point I've spent enough time on this problem and can move on with my
project without using @QueryTextField--I'm just letting anyone who's
concerned know what I've seen in case you want to probe into this issue any
further.
I've taken the time to write a reproducer that can be easily run.
Hi,
Lucene indexes are stored in the heap, while I see that in the reproducer
you've limited the heap size to 1GB. Are you sure you used these JVM opts?
Can you please share the logs from your run, so I can check the heap usage?
Best Regards,
Evgenii
Tue, Apr 30, 2019 at 00:23, kellan :
> The issue
The issue seems to be with the @QueryTextField annotation, unless Lucene
indexes are supposed to be eating up all this memory, in which case it might
be worth improving your documentation.
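For context, a minimal sketch of how the annotation is typically used (the
class and field names are illustrative, not from the reproducer):

    import org.apache.ignite.cache.query.annotations.QuerySqlField;
    import org.apache.ignite.cache.query.annotations.QueryTextField;

    public class Document {
        @QuerySqlField(index = true)
        private long id;

        @QueryTextField // tells Ignite to maintain a Lucene full-text index for this field
        private String body;
    }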
Here is a reproducible example of the DataStreamer memory leak:
https://github.com/kellanburket/ignite-leak
I've also added a public image to DockerHub: miraco/ignite:leak
This can be run on a machine with at least 22GB of memory available to
Docker and probably 50GB of storage between the WAL and the persistence files.
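For anyone trying to run it, a hypothetical invocation (the memory flag is my
assumption; check the repo's README for the actual command):

    docker run --memory=22g miraco/ignite:leak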
Can you share your full configuration (Ignite config and JVM options) and
the server logs of Ignite?
Which version of Ignite do you use?
Can you confirm that on this version and configuration simply disabling
Ignite persistence removes the problem?
If yes, can you try running with walMode=NONE?
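For reference, a minimal sketch of how walMode is set programmatically
(whether this isolates the problem is exactly what the test would show):

    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.configuration.WALMode;

    IgniteConfiguration cfg = new IgniteConfiguration();
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    storageCfg.setWalMode(WALMode.NONE); // disables the write-ahead log entirely
    cfg.setDataStorageConfiguration(storageCfg);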
Any suggestions on where I can go from here? I'd like to find a way to
isolate this problem before I have to look into other storage/grid
solutions. A lot of work has gone into integrating Ignite into our platform,
and I'd really hate to start from scratch. I can provide as much information
as needed.
No luck with the changed configuration. Memory still continues to rise until
it hits the Kubernetes limit (110GB), and then the node crashes. This is
output I pulled from jcmd at some point before the crash. I can post the
detailed memory report if that helps.
Total: reserved=84645150KB, committed=83359362KB
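For anyone following along: a summary like the one above comes from JVM
Native Memory Tracking. Assuming the node was started with
-XX:NativeMemoryTracking=summary (or =detail), the numbers can be pulled
from the running process with:

    jcmd <pid> VM.native_memory summary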
-
I've put a full answer on SO:
https://stackoverflow.com/questions/55752357/possible-memory-leak-in-ignite-datastreamer/55786023#55786023

In short, so far it doesn't look like a memory leak to me, just a
misconfiguration. There is a memory pool in the JVM for direct memory
buffers, which is by default capped at roughly the maximum heap size unless
overridden with -XX:MaxDirectMemorySize.
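As a sketch, the cap can be made explicit in the JVM options (the value is
illustrative, not a recommendation):

    -XX:MaxDirectMemorySize=2g

When it's left unset, direct buffers can grow to roughly the -Xmx value
before the JVM throws an OutOfMemoryError ("Direct buffer memory").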
Hello,
Copying Evgeniy and Stan, our community experts, who will guide you through
this. In the meantime, please try to capture the OOM with this approach:
https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
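One common way to capture it is to start the JVM with these switches (the
dump path is a placeholder):

    -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=/path/to/dumps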
-
Denis
On Sun, Apr 21, 2019 at 8:49 AM kellan wrote:
Update: I've been able to confirm a couple more details:
1. I'm experiencing the same leak with put and putAll as I am with the
DataStreamer.
2. The problem is resolved when persistence is turned off.
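For anyone reproducing this, a minimal sketch of the toggle in question
(region defaults are otherwise untouched):

    import org.apache.ignite.configuration.DataRegionConfiguration;
    import org.apache.ignite.configuration.DataStorageConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();
    DataStorageConfiguration storageCfg = new DataStorageConfiguration();
    DataRegionConfiguration regionCfg = new DataRegionConfiguration();
    regionCfg.setPersistenceEnabled(true); // true reproduces the leak; false makes it go away
    storageCfg.setDefaultDataRegionConfiguration(regionCfg);
    cfg.setDataStorageConfiguration(storageCfg);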
Looping in the dev list.
Community, does this remind you of any memory leak addressed in the master?
What do we need to do to get to the bottom of this issue?
Denis
On Friday, April 19, 2019, kellan wrote:
> After doing additional tests to isolate the issue, it looks like Ignite is
> having a problem releasing
After doing additional tests to isolate the issue, it looks like Ignite is
having a problem releasing internal memory of cache objects passed into the
NIO ByteBuffers that back the DataStreamer objects. At first I thought this
might be on account of my Avro ByteBuffers that get transformed into
So I've done a heap dump and recorded heap metrics while running my
DataStreamers and the heap doesn't appear to be the problem here. Ignite
operates normally for several hours without the heap size ever reaching its
max. My durable memory also seems to be behaving as expected. While looking
at
A heap dump won't address non-heap memory issues, which is most likely what
I'm running into. Where can memory build up in Ignite outside of durable
memory and heap memory?
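One non-obvious place is the JVM's direct and mapped buffer pools, which
live outside both the heap and Ignite's durable memory. A minimal sketch of
how to inspect them from inside the process (plain JDK API, nothing
Ignite-specific):

    import java.lang.management.BufferPoolMXBean;
    import java.lang.management.ManagementFactory;
    import java.util.List;

    public class BufferPools {
        public static void main(String[] args) {
            // "direct" covers NIO direct ByteBuffers; "mapped" covers memory-mapped files.
            List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
            for (BufferPoolMXBean pool : pools)
                System.out.printf("%s: used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
    }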
Hello!
I suggest collecting a heap dump and taking a long look at it.
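For example, against a live process (pid and file path are placeholders):

    jmap -dump:live,format=b,file=/tmp/ignite-heap.hprof <pid>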
Regards,
--
Ilya Kasnacheev
Mon, Apr 15, 2019 at 15:35, kellan :
> I'm confused. If the DataStreamer blocks until all data is loaded into
> remote
> caches and I'm only ever running a fixed number of DataStreamers (4
I'm confused. If the DataStreamer blocks until all data is loaded into remote
caches, and I'm only ever running a fixed number of DataStreamers (4 max),
which close after they read a single file of a more or less fixed length
each time (no more than 200MB; i.e. I shouldn't have more than 800MB +
Hello!
The DataStreamer WILL block until all data is loaded into the caches.
The recommendation here is probably to reduce perNodeParallelOperations(),
streamerBufferSize() and perThreadBufferSize(), and to flush() your
DataStreamer frequently to avoid data build-ups in its temporary data
structures.
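A minimal sketch of that tuning (the cache name and values are illustrative,
not tuned; I believe streamerBufferSize() above corresponds to
perNodeBufferSize() in the IgniteDataStreamer API):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;
    import org.apache.ignite.Ignition;

    public class StreamerTuning {
        public static void main(String[] args) {
            try (Ignite ignite = Ignition.start()) {
                ignite.getOrCreateCache("myCache"); // the streamer's target cache must exist

                try (IgniteDataStreamer<Long, byte[]> streamer = ignite.dataStreamer("myCache")) {
                    streamer.perNodeParallelOperations(2); // fewer in-flight batches per node
                    streamer.perNodeBufferSize(256);       // smaller per-node buffer (entries)
                    streamer.perThreadBufferSize(512);     // smaller per-thread buffer (entries)

                    for (long i = 0; i < 1_000_000; i++) {
                        streamer.addData(i, new byte[256]); // dummy payload for illustration
                        if (i % 100_000 == 0)
                            streamer.flush(); // push buffered entries out instead of letting them pile up
                    }
                } // close() flushes the remainder and blocks until it's loaded
            }
        }
    }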