Hi

See below

> On Aug 22, 2016, at 8:52 AM, DLopez <d.lope...@gmail.com> wrote:
> 
> Hi Denis,
> The exception was happening to us in a cluster where all the nodes have the
> same configuration and use the same piece of code to set up the cluster, and
> it happened without restarting any node at all.

Please refer to my latest answer to Binti regarding this exception.

> Regarding the first issue, one node being slow due to GC issues would slow
> down the whole cluster, even when just reading cache values.
> 
> Regarding our solution, we really just needed a set of replicated caches with
> periodic updates and just 1-2 caches updated more frequently, but we didn't
> need up-to-the-ms updates or grid capability, so we created our own custom
> solution that skips most of the problems that make this kind of software hard
> and that we did not really need. In fact, Ignite was the 6th product that we
> discarded after implementation, with a couple more discarded even sooner. It
> seems the industry is now more focused on the grid part, so we have been
> unable to find a good fit for our use case.
> 
Got it. It's a pity. I hope we will get to the bottom of the issue with help
from Binti's side.

—
Denis

> D.
> 
> 2016-08-21 7:42 GMT+02:00 Denis Magda [via Apache Ignite Users] <[hidden email]>:
> This exception usually happens when the cluster where a cache had been
> initially deployed was restarted without re-creating the cache, and after
> reconnecting to the cluster a client node tries to access a cache that no
> longer exists. The exception can easily be avoided if all the caches are
> defined in the server nodes' configuration used at startup, or if the client
> creates the cache again after the reconnection.
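
As an illustration of the second option, here is a minimal client-side sketch
(not from this thread; the cache name "myCache", the REPLICATED mode and the
key/value types are assumptions): calling getOrCreateCache() with an explicit
CacheConfiguration returns the existing cache or re-creates it if the restarted
cluster no longer has it.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class ClientReconnectSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // this node joins the cluster as a client

        try (Ignite ignite = Ignition.start(cfg)) {
            // "myCache" and REPLICATED are placeholder settings.
            CacheConfiguration<Integer, String> cacheCfg =
                new CacheConfiguration<>("myCache");
            cacheCfg.setCacheMode(CacheMode.REPLICATED);

            // Returns the existing cache, or re-creates it if the servers
            // were restarted without it, so the client never ends up holding
            // a proxy to a destroyed cache.
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cacheCfg);

            cache.put(1, "value");
            System.out.println(cache.get(1));
        }
    }
}

The first option achieves the same result declaratively: list the cache
configurations in the server nodes' IgniteConfiguration (or Spring XML) so the
caches are started together with the cluster.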
> 
> BTW, what kind of IMDG did you switch to?
> 
> —
> Denis
>   
>> On Aug 20, 2016, at 1:28 AM, DLopez <[hidden email]> wrote:
>> 
>> Hi Binti,
>> No, we were unable to resolve it. We were also quite frequently hitting
>> another known issue, "IllegalStateException: Cache has been closed or
>> destroyed: cache", so we decided to move on and drop Ignite altogether. We
>> finished the transition in production a couple of weeks ago.
>> I'm sorry I can't be of much help,
>> D.
>> 
>> 2016-08-20 1:31 GMT+02:00 bintisepaha [via Apache Ignite Users] <[hidden email]>:
>> Hi, we are seeing similar issues too with Ignite 1.5.0. Were you guys able
>> to resolve it? We use distributed caches in full-sync mode and also a few
>> replicated caches.
>> 
>> Thanks, 
>> Binti 
>> 
