Hi Assaf

Maybe you have run into the tombstone mechanism.

You can find some words about tombstones at the following link:
http://geode.apache.org/docs/guide/developing/distributed_regions/how_region_versioning_works.html

"How Destroy and Clear Operations Are Resolved"

"When consistency checking is enabled for a region, a Geode member does not
immediately remove an entry from the region when an application destroys
the entry. Instead, the member retains the entry with its current version
stamp for a period of time in order to detect possible conflicts with
operations that have occurred. The retained entry is referred to as a
tombstone."
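As a pointer, consistency checking (and therefore tombstones) is a per-region setting. A minimal gfsh sketch, where the region name is made up and the option is already true by default for most region types:

```shell
# Sketch: create a region with consistency checking explicitly enabled,
# so destroyed entries are retained as tombstones for conflict detection.
gfsh> create region --name=exampleRegion --type=PARTITION --enable-concurrency-checks=true
```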

Thanks and regards

Gang Yan(闫钢)
Sr. Solution Architect
Customer Success, Pivotal Great China
Mobile:+86 13636347460
g...@pivotal.io

2016-12-14 6:31 GMT+08:00 Udo Kohlmeyer <ukohlme...@pivotal.io>:

> The way overflow works is when the eviction-threshold is reached, the
> system will ask each region with overflow enabled to start overflowing data
> to disk.
>
> Whilst you have overflow enabled, please set the previously mentioned
> -XX:CMSInitiatingOccupancyFraction=X and -XX:+UseCMSInitiatingOccupancyOnly.
> Set the InitiatingOccupancyFraction a few percent lower than the
> eviction-threshold. Then the GC will kick in and clean up before the system
> has to overflow to disk. Also, once data is overflowed to disk, the GC will
> try to clean up any garbage left behind.
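> To illustrate the flags above — a sketch of a server start line, where the
> server name, heap size, and threshold values are made up; the occupancy
> fraction is set a few percent below the eviction threshold:
>
> ```shell
> # Sketch: GC initiating occupancy (65) below the eviction threshold (70),
> # so CMS collects before Geode starts overflowing region data to disk.
> gfsh start server --name=server1 --max-heap=4G \
>   --eviction-heap-percentage=70 --critical-heap-percentage=90 \
>   --J=-XX:+UseConcMarkSweepGC \
>   --J=-XX:CMSInitiatingOccupancyFraction=65 \
>   --J=-XX:+UseCMSInitiatingOccupancyOnly
> ```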
>
> Hope this helps...
>
> --Udo
>
> On 12/13/16 14:22, assaf_waiz...@amat.com wrote:
>
> Thanks Eric,
>
> I mistakenly thought that offheap is overflow...
> I'll test offheap for my scenario and hope it will perform well on my
> data sizes and variance.
> One more thing I would like to ask: I saw that the overflow feature can be
> configured per region. Does that mean it also checks the memory usage for
> the region internally (rather than by querying the heap)? If so, it might
> be a possible workaround for me too.
>
> Once again, I really appreciate your help.
>
> On 14 Dec 2016, at 0:05, Charlie Black <cbl...@pivotal.io> wrote:
>
> It feels like Java hasn't hit the threshold where the GC would kick in.
>
> From a Geode perspective, if the entry count is zero, all of the references
> to the objects stored in Geode should be removed and waiting to be
> reclaimed by Java.
>
> If you would like to force a GC, try the gfsh command "gc", which will
> attempt to run the Java Garbage Collector.   Here is an example run:
>
> gfsh>gc
>
> GC Summary
>
>         Member ID/Name          | HeapSize (MB) Before GC | HeapSize (MB) After GC | Time Taken for GC in ms
> ------------------------------- | ----------------------- | ---------------------- | -----------------------
> 192.168.5.1(foo:23089)<v0>:9696 | 189                     | 142                    | 84
>
> On Tue, Dec 13, 2016 at 2:40 PM <assaf_waiz...@amat.com> wrote:
>
>> Indeed, the eviction/offheap features or persistence can work around this.
>> I considered offheap, but the problem for my scenario is that upon
>> reaching the threshold (which is tested by checking heap status), offheap
>> kicks in and I will take a throughput hit although I have free space in
>> RAM (the server's element usage is less than the threshold after some
>> keys are removed, but since offheap is based on querying the heap, new
>> puts will go offheap).
>>
>>
>>
>> My system requirements can't afford working with disk while there is free
>> RAM; it would impact performance dramatically.
>>
>>
>>
>> Please share if you have more ideas to overcome this.
>>
>>
>>
>> Thanks a lot.
>>
>>
>>
>> *From:* Eric Shu [mailto:e...@pivotal.io]
>> *Sent:* Tuesday, December 13, 2016 23:27
>>
>>
>> *To:* user@geode.apache.org
>> *Subject:* Re: Geode memory handling issues
>>
>>
>>
>> Others might chime in as well. I think most users set eviction if they
>> know there will be a memory issue. With the correct settings, eviction
>> kicks in before the critical heap threshold (LowMemoryException) is reached.
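>> As a sketch of that kind of setup (region name made up), per-region
>> eviction with overflow can be configured in gfsh via a shortcut type:
>>
>> ```shell
>> # Sketch: LRU heap eviction that overflows evicted values to disk
>> # once the server's eviction-heap-percentage is reached.
>> gfsh> create region --name=exampleRegion --type=PARTITION_OVERFLOW
>> ```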
>>
>>
>>
>> Also, if it is a partitioned region, you can add capacity by adding more
>> nodes so that the system will be able to proceed. Users may also shut down
>> nodes and restart them again -- with persistence enabled, the data will be
>> recovered from disk.
>>
>>
>>
>> Regards,
>>
>> Eric
>>
>>
>>
>> On Tue, Dec 13, 2016 at 1:16 PM, <assaf_waiz...@amat.com> wrote:
>>
>> Thanks Dan & Jared,
>>
>> I carefully read the documentation for the resource management and
>> noticed the part that describes the interaction between it and the GC.
>> I saw that my server is working with ConcMarkSweep and set the
>> InitiatingOccupancyFraction to a lower percentage, but with no help; GC just
>> won't kick in. I also tried the G1 GC with no help. (Also, just to make
>> sure that is the cause, I attached to the JVM using YourKit and initiated GC
>> manually, and indeed the server was able to accept put requests again.)
>>
>> From my experience with JVM memory management, counting on GC collection
>> can be problematic as you can't really control the collection timing; in
>> addition, invoking it directly (System.gc) is considered bad practice. I
>> would expect such a resource manager to track its elements' usage directly
>> rather than by querying the heap status.
>>
>> This memory usage issue seems to be very basic; how come GemFire
>> users (who, I know, run it in production) are not facing this problem?
>>
>> I really appreciate your help guys.
>>
>> -----Original Message-----
>> From: Dan Smith [mailto:dsm...@pivotal.io]
>> Sent: Tuesday, December 13, 2016 21:44
>> To: user@geode.apache.org
>> Subject: Re: Geode memory handling issues
>>
>> Hi Assaf,
>>
>> +1 for that link Jared sent out. In order for the resource manager to
>> work, you need to be using ConcMarkSweep with an
>> InitiatingOccupancyFraction that's less than your critical and eviction
>> heap thresholds. That will cause GC to kick in if your heap is above those
>> thresholds.
>>
>> -Dan
>>
>> On Tue, Dec 13, 2016 at 11:36 AM, Jared Stewart <jstew...@pivotal.io>
>> wrote:
>> > Hi Assaf,
>> >
>> > There is some information about tuning the JVM’s garbage collection
>> > parameters to work with eviction available here:
>> > http://geode.apache.org/docs/guide/managing/heap_use/heap_management.html#resource_manager
>> >
>> > Best,
>> > Jared
>> >
>> > On Dec 13, 2016, at 11:26 AM, <assaf_waiz...@amat.com>
>> > <assaf_waiz...@amat.com> wrote:
>> >
>> > Hi Eric,
>> >
>> > Thanks for the quick response!
>> > Shutting down the server is problematic for me. In a real scenario I
>> > won't remove all items but only some of them, so the server still
>> > contains data but has free space – still, I will get the exception.
>> > Shutting down the server would cause me to lose other entries – I'm
>> > afraid it's not applicable to me.
>> >
>> > I also understand that I can't really control GC collection, so how
>> > is the critical-threshold expected to work? Once you reach it, you
>> > can't add more items to the server even if you have removed some.
>> >
>> > Thanks.
>> >
>> > From: Eric Shu [mailto:e...@pivotal.io]
>> > Sent: Tuesday, December 13, 2016 21:21
>> > To: user@geode.apache.org
>> > Subject: Re: Geode memory handling issues
>> >
>> > I am not sure if GC collection can be controlled. One possible way to
>> > work around this is to shut down the server and restart it.
>> >
>> > Also, you may want to try an off-heap region to cope with this issue?
>> >
>> > Regards,
>> > Eric
>> >
>> > On Tue, Dec 13, 2016 at 11:02 AM, <assaf_waiz...@amat.com> wrote:
>> > Hi,
>> >
>> > I am facing some issues with Geode memory/resource management.
>> > I have a simple environment with a single locator and a single server,
>> > both launched via gfsh.
>> > The server is started with initial-heap=max-heap=4GB and with
>> > critical-heap-percentage=70%.
>> > I also created a single region of type partition using gfsh.
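>> > For reference, that setup corresponds roughly to the following gfsh
>> > commands (server and region names made up):
>> >
>> > ```shell
>> > gfsh> start server --name=server1 --initial-heap=4G --max-heap=4G \
>> >   --critical-heap-percentage=70
>> > gfsh> create region --name=exampleRegion --type=PARTITION
>> > ```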
>> >
>> > In addition, I created a simple client application (client-server
>> > topology, accessing the region via a ClientCache PROXY). The client
>> > simply puts data elements (of 100MB each) into the region iteratively.
>> > Upon putting a total size of ~70% of 4GB, I get an exception on the
>> > client side which says the server is running at low memory, as expected –
>> > so far so good.
>> > While observing the server metrics (using show metrics) I see that the
>> > server holds data entries as expected and the heap usage and total
>> > heap size are fine too.
>> >
>> > Now, I’m removing all elements from the region – the server metrics
>> > after this operation show that the element count is 0 (i.e. the
>> > server is empty) but heap usage is still high (probably because GC
>> > didn’t collect the freed items). That shouldn’t bother me, but the
>> > problem is that if I try to put additional elements into the region now,
>> > I still get the exception on the client.
>> > Although the server is empty, the client can’t put items into it, and
>> > this is very problematic from a user's point of view.
>> >
>> > I tried to play with the GC flags and changed the GC to G1, but with no
>> > success. I can’t control GC collection – I'm left with an idle, empty
>> > server with no way to add elements to it.
>> >
>> > What am I missing here? Is there some other configuration I should
>> follow?
>> >
>> > Thanks.
>> >
>> > Assaf Waizman
>> > SW Architect | Process Diagnostics and Control | Applied Materials
>> > 9 Oppenheimer Street, Rehovot, 76705.  Israel.
>> > Office +972.8.948.8661 | Mobile +972.54.80.10.799 | Fax
>> > +972.8.948.8848
>> >
>> >
>>
>>
>>
> --
> ---
> cbl...@pivotal.io | +1.858.480.9722
>
>
>
