Hi, I'm looking into an issue where we have an Ignite (2.6) client node running
a continuous query (CacheContinuousQuery) on a cache full of binary objects, and
eventually the server node runs out of memory. Normally the server node is happy
with a 2-3 GB heap, but while this client has the continuous query running the
heap can grow to over 20 GB. Once the client is stopped, the objects are garbage
collected on the server node about 20 minutes later and usage drops back to
around 2 GB. In heap dumps I can see the heap is full of
CacheContinuousQueryEvent instances.
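
For context, the query is registered on the client roughly like this (simplified
sketch; the cache name, "ignite" instance and handleCacheEvent are placeholders
for our real code):

    IgniteCache<String, MyBinaryObject> cache = ignite.cache("myCache");

    ContinuousQuery<String, MyBinaryObject> qry = new ContinuousQuery<>();

    // Local listener runs on this client node for every update pushed by the servers.
    qry.setLocalListener(events -> {
        for (CacheEntryEvent<? extends String, ? extends MyBinaryObject> event : events)
            handleCacheEvent(event);
    });

    // Start receiving updates; the cursor stays open for as long as we want events.
    QueryCursor<Cache.Entry<String, MyBinaryObject>> cursor = cache.query(qry);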

Some questions:

1) In my client-side continuous query handler, what happens when an exception is
thrown from the handler?

    private void handleCacheEvent(CacheEntryEvent<? extends String, ? extends MyBinaryObject> event) {
        // Tries to deserialize the event value, but fails with an error.
    }

I don't see any exception propagated anywhere. Will this event stay in the cache
as an unconsumed event, potentially causing a leak for events that have not been
handled correctly?
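
For now I can obviously wrap the handler body so nothing escapes, something like
the sketch below (process() and log are just placeholders for our real code):

    private void handleCacheEvent(CacheEntryEvent<? extends String, ? extends MyBinaryObject> event) {
        try {
            process(event); // deserialization happens in here and can throw
        }
        catch (Exception e) {
            log.error("Failed to handle event for key " + event.getKey(), e);
        }
    }

but I'd still like to understand whether an exception escaping the listener
leaves the event retained anywhere on the server side.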

2) Why does CacheContinuousQueryEvent keep a reference to 'oldVal', i.e. the old
value in the cache? This could be part of the problem, as we don't care about old
values at all. Can we switch that off, and why isn't that the default?
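
One thing I've been looking at is
org.apache.ignite.cache.query.ContinuousQueryWithTransformer (available since
2.5), which, if I understand it correctly, runs a transformer on the server
nodes and only ships the transformed result to the client. I'm assuming that
would also avoid shipping/holding old values, but I'm not sure. Roughly what I
mean (sketch, not tested; onKeyUpdated is a placeholder):

    ContinuousQueryWithTransformer<String, MyBinaryObject, String> qry =
        new ContinuousQueryWithTransformer<>();

    // Transformer runs on the server nodes; only its result is sent to the client.
    qry.setRemoteTransformerFactory(FactoryBuilder.factoryOf(
        (IgniteClosure<CacheEntryEvent<? extends String, ? extends MyBinaryObject>, String>)
            event -> event.getKey()));

    // The local listener then only sees the transformed values.
    qry.setLocalListener(keys -> {
        for (String key : keys)
            onKeyUpdated(key);
    });

    cache.query(qry);

Is that the recommended way to avoid keeping old values around, or is there a
flag on the plain ContinuousQuery?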

3) In the method that handles cache events, is it best practice to put the event
straight onto a blocking queue so that there is no slow-consumer problem? That
makes sense to me, but I don't see it recommended anywhere. If we don't do this,
I can imagine the outbound queue on the server node growing (rough sketch of
what I mean below)...
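
The pattern I have in mind is a bounded hand-off between the Ignite callback
thread and our own consumer thread (sketch only; the queue size, process() and
log are placeholders):

    private final BlockingQueue<CacheEntryEvent<? extends String, ? extends MyBinaryObject>> queue =
        new ArrayBlockingQueue<>(10_000);

    private void handleCacheEvent(CacheEntryEvent<? extends String, ? extends MyBinaryObject> event) {
        // Return to Ignite as fast as possible; in this sketch a full queue simply
        // drops the event, a real implementation would need a proper policy here.
        if (!queue.offer(event))
            log.warn("Local event queue full, dropping update for key " + event.getKey());
    }

    // Separate consumer thread does the slow work (deserialization, downstream calls, etc).
    private void consumeLoop() throws InterruptedException {
        while (!Thread.currentThread().isInterrupted())
            process(queue.take());
    }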

thanks for any pointers!



