Even with overflow, you need enough heap to retain all of your keys in memory;
only the values are evicted to disk.  Values begin to be evicted when heap use
crosses the EVICTION threshold, and further writes are blocked once the
CRITICAL threshold is reached.  Your GC settings should be tuned to match these
thresholds (search the mailing list archives and/or the wiki for advice on GC
tuning).
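
If it helps, here is a minimal sketch (Java, against the com.gemstone.gemfire
API your stack trace shows) of setting those two thresholds programmatically.
The 75/90 percentages are placeholders for illustration, not a recommendation
for your workload:

    import com.gemstone.gemfire.cache.Cache;
    import com.gemstone.gemfire.cache.CacheFactory;
    import com.gemstone.gemfire.cache.control.ResourceManager;

    public class HeapThresholds {
      public static void main(String[] args) {
        // Create (or reuse) the cache for this member.
        Cache cache = new CacheFactory().create();

        ResourceManager rm = cache.getResourceManager();
        // Eviction/overflow of values starts once tenured heap use
        // crosses this percentage (placeholder value).
        rm.setEvictionHeapPercentage(75.0f);
        // Further writes are rejected once heap use crosses this
        // percentage (placeholder value).
        rm.setCriticalHeapPercentage(90.0f);
      }
    }

The same percentages can also be set declaratively via the <resource-manager>
element in cache.xml or, if I remember correctly, with gfsh's
start server --eviction-heap-percentage / --critical-heap-percentage options.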

Anthony

> On Jun 17, 2016, at 6:42 AM, Avinash Dongre <dongre.avin...@gmail.com> wrote:
> 
> Thanks Anthony,
> 
> I know that my heap is not sufficient for the data I want to put in,
> but my region is configured as PARTITION_OVERFLOW, so I was hoping that
> once Geode reaches the critical heap threshold it would start overflowing
> the data to disk.
> 
> Is this assumption correct?
> 
> thanks
> Avinash
> 
> 
> On Thu, Jun 16, 2016 at 8:21 PM, Anthony Baker <aba...@pivotal.io> wrote:
> 
>> Hi Avinash,
>> 
>> The question to answer is “Why was a member removed from the cluster?”
>> Some things to investigate:
>> 
>> - Insufficient heap for the data volume
>> - Excessive GC causing the member to be unresponsive
>> - OutOfMemory errors in the log
>> - Overloaded CPU causing delayed heartbeat responses
>> 
>> HTH,
>> Anthony
>> 
>>> On Jun 16, 2016, at 6:48 AM, Avinash Dongre <dongre.avin...@gmail.com> wrote:
>>> 
>>> Hello All,
>>> 
>>> I am getting the following exception when I try to load my system with a
>>> large amount of data.
>>> 
>>> My Setup Details:
>>> 1 locator and 3 cache servers with 8g heap, and all the regions have disk
>>> persistence enabled. (All of this is running on a single AWS node.)
>>> 
>>> Please give me some clues about what I am missing here.
>>> 
>>> 
>>> [severe 2016/06/16 12:51:15.552 UTC S1 <Notification Handler> tid=0x40]
>>> Uncaught exception in thread Thread[Notification Handler,10,ResourceListenerInvokerThreadGroup]
>>> com.gemstone.gemfire.distributed.DistributedSystemDisconnectedException: DistributedSystem is shutting down, caused by com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.directChannelSend(GMSMembershipManager.java:1719)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.send(GMSMembershipManager.java:1897)
>>>       at com.gemstone.gemfire.distributed.internal.DistributionChannel.send(DistributionChannel.java:87)
>>>       at com.gemstone.gemfire.distributed.internal.DistributionManager.sendOutgoing(DistributionManager.java:3427)
>>>       at com.gemstone.gemfire.distributed.internal.DistributionManager.sendMessage(DistributionManager.java:3468)
>>>       at com.gemstone.gemfire.distributed.internal.DistributionManager.putOutgoing(DistributionManager.java:1828)
>>>       at com.gemstone.gemfire.internal.cache.control.ResourceAdvisor$ResourceProfileMessage.send(ResourceAdvisor.java:185)
>>>       at com.gemstone.gemfire.internal.cache.control.ResourceAdvisor.updateRemoteProfile(ResourceAdvisor.java:448)
>>>       at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.processLocalEvent(HeapMemoryMonitor.java:677)
>>>       at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.updateStateAndSendEvent(HeapMemoryMonitor.java:485)
>>>       at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor.updateStateAndSendEvent(HeapMemoryMonitor.java:448)
>>>       at com.gemstone.gemfire.internal.cache.control.HeapMemoryMonitor$2.run(HeapMemoryMonitor.java:718)
>>>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>>>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>>>       at java.lang.Thread.run(Thread.java:745)
>>> Caused by: com.gemstone.gemfire.ForcedDisconnectException: Member isn't responding to heartbeat requests
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.mgr.GMSMembershipManager.forceDisconnect(GMSMembershipManager.java:2551)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.forceDisconnect(GMSJoinLeave.java:885)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.processRemoveRequest(GMSJoinLeave.java:578)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.membership.GMSJoinLeave.processMessage(GMSJoinLeave.java:1540)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.JGroupsMessenger$JGroupsReceiver.receive(JGroupsMessenger.java:1061)
>>>       at org.jgroups.JChannel.invokeCallback(JChannel.java:816)
>>>       at org.jgroups.JChannel.up(JChannel.java:741)
>>>       at org.jgroups.stack.ProtocolStack.up(ProtocolStack.java:1030)
>>>       at org.jgroups.protocols.FRAG2.up(FRAG2.java:165)
>>>       at org.jgroups.protocols.FlowControl.up(FlowControl.java:392)
>>>       at org.jgroups.protocols.UNICAST3.deliverMessage(UNICAST3.java:1064)
>>>       at org.jgroups.protocols.UNICAST3.handleDataReceived(UNICAST3.java:779)
>>>       at org.jgroups.protocols.UNICAST3.up(UNICAST3.java:426)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.StatRecorder.up(StatRecorder.java:69)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.AddressManager.up(AddressManager.java:74)
>>>       at org.jgroups.protocols.TP.passMessageUp(TP.java:1567)
>>>       at org.jgroups.protocols.TP$SingleMessageHandler.run(TP.java:1783)
>>>       at org.jgroups.util.DirectExecutor.execute(DirectExecutor.java:10)
>>>       at org.jgroups.protocols.TP.handleSingleMessage(TP.java:1695)
>>>       at org.jgroups.protocols.TP.receive(TP.java:1620)
>>>       at com.gemstone.gemfire.distributed.internal.membership.gms.messenger.Transport.receive(Transport.java:158)
>>>       at org.jgroups.protocols.UDP$PacketReceiver.run(UDP.java:701)
>>>       ... 1 more
>> 
>> 
