Re: Ignite Cache Stopped

2017-02-21 Thread Andrey Gura
I think it is just the H2 wrapper for string values.

On Tue, Feb 21, 2017 at 8:21 AM, Anil wrote:
> Thanks Andrey.
>
> I see the node is down even though the GC log looks good. I will try to reproduce.
>
> May I know what the org.h2.value.ValueString objects in the attached
> screenshot are?
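[Editor's note, not part of the original thread: Ignite's SQL engine is built on H2, which is why H2 value wrappers such as org.h2.value.ValueString show up in heap analysis. One way to reproduce this kind of analysis is to take a heap dump of the suspect node and open it in a memory profiler. A minimal sketch using the HotSpot diagnostic MXBean; the dump file name is an arbitrary assumption.]

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumpExample {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
            server, "com.sun.management:type=HotSpotDiagnostic",
            HotSpotDiagnosticMXBean.class);

        // Dump live objects only; open the file in a profiler to see which
        // classes (for example org.h2.value.ValueString) dominate the heap.
        bean.dumpHeap("ignite-node.hprof", true);
    }
}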

Re: Ignite Cache Stopped

2017-02-20 Thread Andrey Gura
Anil,

No, it doesn't. Only the client should leave the topology in this case.

On Mon, Feb 20, 2017 at 3:44 PM, Anil wrote:
> Hi Andrey,
>
> Does client Ignite GC impact the Ignite cluster topology?
>
> Thanks
>
> On 17 February 2017 at 22:56, Andrey Gura wrote:
>>
>>
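[Editor's note, not part of the original thread: the scenario Andrey describes relies on the node running in client mode, since a client holds no primary or backup partitions and its departure does not trigger data rebalancing on the servers. A minimal sketch of starting a client node; everything beyond setClientMode is illustrative.]

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientNodeExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Start this JVM as a client node: it joins the topology but holds no
        // cache data, so only this node is affected if it is dropped.
        cfg.setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Client joined topology: "
                + ignite.cluster().localNode().id());
        }
    }
}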

Re: Ignite Cache Stopped

2017-02-20 Thread Anil
Hi Andrey,

Does client Ignite GC impact the Ignite cluster topology?

Thanks

On 17 February 2017 at 22:56, Andrey Gura wrote:
> From the GC logs at the end of the files I see Full GC pauses like this:
>
> 2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
>

Re: Ignite Cache Stopped

2017-02-17 Thread Andrey Gura
From the GC logs at the end of the files I see Full GC pauses like this:

2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
10226M->8526M(10G), 26.8952036 secs]
[Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B
Heap: 10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:

Re: Ignite Cache Stopped

2017-02-17 Thread Anil
Hi Andrey,

I checked the GC logs and everything looks good.

Thanks

On 17 February 2017 at 20:45, Andrey Gura wrote:
> Anil,
>
> IGNITE-4003 isn't related to your problem.
>
> I think that the nodes are going out of topology due to long GC pauses.
> You can easily check this using

Re: Ignite Cache Stopped

2017-02-17 Thread Andrey Gura
Anil,

IGNITE-4003 isn't related to your problem.

I think that the nodes are going out of topology due to long GC pauses.
You can easily check this using GC logs.

On Fri, Feb 17, 2017 at 6:04 PM, Anil wrote:
> Hi,
>
> We noticed that whenever long-running queries are fired, the nodes are
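[Editor's note, not part of the original thread: a node that stays unresponsive longer than the cluster's failure detection timeout (10,000 ms by default, as far as I recall) is treated as failed and removed from the topology, so a 26.9-second Full GC like the one quoted above is more than enough to drop a node. A minimal sketch of raising the timeout as a stopgap while the GC behaviour is being fixed; the 30-second value is only an illustrative assumption.]

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FailureDetectionTimeoutExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Tolerate pauses of up to 30 seconds before the node is dropped from
        // the topology (illustrative value; the default is 10,000 ms).
        cfg.setFailureDetectionTimeout(30_000);

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Failure detection timeout: "
                + ignite.configuration().getFailureDetectionTimeout() + " ms");
        }
    }
}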

Ignite Cache Stopped

2017-02-17 Thread Anil
Hi,

We noticed that whenever long-running queries are fired, nodes are going
out of topology and the entire Ignite cluster goes down. In my case, a
filter criteria could match 5L (500,000) records, so each API request
fetches 250 records. As the page number increases, the query execution
time gets high and
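[Editor's note, not part of the original thread: a common way to avoid the "later pages get slower" pattern of offset-based paging is keyset pagination, where each page filters on the last key returned by the previous page instead of skipping rows. A minimal sketch against an assumed Person table; the cache name, table, and column names are illustrative only, not the poster's actual schema.]

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class KeysetPagingExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Illustrative cache name; replace with the cache backing the API.
            IgniteCache<Long, Object> cache = ignite.cache("personCache");

            long lastSeenId = 0;   // key of the last row of the previous page
            int pageSize = 250;    // page size mentioned in the original message

            // Filter on the last seen key instead of using OFFSET, so the cost
            // of fetching a page does not grow with the page number.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT id, name FROM Person WHERE id > ? ORDER BY id LIMIT ?")
                .setArgs(lastSeenId, pageSize);

            List<List<?>> page = cache.query(qry).getAll();
            page.forEach(row -> System.out.println(row.get(0) + " " + row.get(1)));
        }
    }
}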