Hi,
The node just stopped.
In our test the issue is reproduced whenever the producer continuously PUTs into the region while the consumer stops doing GETs and stops deleting entries from the region.
After a while, as the eviction rate goes up, the exception occurs. We tested with GEODE 1.2.
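For illustration only, here is a minimal sketch of the producer side of that scenario (the locator address, key/value types, and value size are placeholders; our real keys are composite objects with PartitionID and Index fields). The point is simply continuous puts with no gets or destroys, so the entry count stays above the 200000-entry LRU limit and overflow-to-disk eviction runs constantly:

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class ProducerOnlyLoad {
    public static void main(String[] args) {
        // Placeholder locator; use whatever the test environment points at.
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("locator-host", 10334)
                .create();

        Region<String, byte[]> region = cache
                .<String, byte[]>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("EXTERNAL_RECORDS_1");

        byte[] value = new byte[1024]; // ~1 KB values, just to grow the buckets quickly
        long i = 0;
        while (true) {
            // Only PUTs, never GET/destroy, so the region stays above the
            // 200000-entry LRU limit and eviction keeps overflowing to disk.
            region.put("key-" + (i++), value);
        }
    }
}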
Here is the region definition:

<region name="EXTERNAL_RECORDS_1">
    <region-attributes concurrency-checks-enabled="false"
                        data-policy="partition"
                        disk-store-name="ExternalRecord-overflow"
                        disk-synchronous="false">
        <partition-attributes startup-recovery-delay="60000"
                              redundant-copies="1"
                              colocated-with="PWINFO_1"
                              total-num-buckets="251"/>
        <eviction-attributes>
            <lru-entry-count maximum="200000" action="overflow-to-disk"/>
        </eviction-attributes>
    </region-attributes>
    <index name="ERPartitionIDIndex" from-clause="/EXTERNAL_RECORDS_1.entrySet e" expression="e.key.PartitionID"/>
    <index name="ERFuncIndex" from-clause="/EXTERNAL_RECORDS_1.entrySet e" expression="e.key.Index"/>
</region>
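In case a programmatic view is easier to read, here is a rough Java API equivalent of that cache.xml fragment, a sketch only (it assumes the ExternalRecord-overflow disk store is created elsewhere and that the colocated PWINFO_1 region already exists):

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.EvictionAction;
import org.apache.geode.cache.EvictionAttributes;
import org.apache.geode.cache.PartitionAttributesFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionFactory;
import org.apache.geode.cache.RegionShortcut;

public class ExternalRecordsRegion {
    // Rough equivalent of the cache.xml definition above.
    public static Region<?, ?> define(Cache cache) throws Exception {
        PartitionAttributesFactory<Object, Object> paf = new PartitionAttributesFactory<>();
        paf.setStartupRecoveryDelay(60000)
           .setRedundantCopies(1)
           .setColocatedWith("PWINFO_1")
           .setTotalNumBuckets(251);

        RegionFactory<Object, Object> rf = cache.createRegionFactory(RegionShortcut.PARTITION);
        rf.setConcurrencyChecksEnabled(false)
          .setDiskStoreName("ExternalRecord-overflow")
          .setDiskSynchronous(false)
          .setEvictionAttributes(
              EvictionAttributes.createLRUEntryAttributes(200000, EvictionAction.OVERFLOW_TO_DISK))
          .setPartitionAttributes(paf.create());

        Region<Object, Object> region = rf.create("EXTERNAL_RECORDS_1");

        // Same two OQL indexes as in the XML definition.
        cache.getQueryService().createIndex(
            "ERPartitionIDIndex", "e.key.PartitionID", "/EXTERNAL_RECORDS_1.entrySet e");
        cache.getQueryService().createIndex(
            "ERFuncIndex", "e.key.Index", "/EXTERNAL_RECORDS_1.entrySet e");
        return region;
    }
}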
Thanks

From: Anilkumar Gingade [mailto:aging...@pivotal.io]
Sent: Wednesday, November 22, 2017 10:16 PM
To: dev@geode.apache.org
Cc: Gregory Vortman <gregory.vort...@amdocs.com>; u...@geode.apache.org; 
*Technology - Digital - BSS – Charging - GEODE team 
<pbgrcmrmgeodet...@int.amdocs.com>; Victoria Boriskovsky <victo...@amdocs.com>
Subject: Re: DiskStore exception while region data evicted

Hi Gregory,

Is the node stopped and restarted? If so, how many times?
It looks like the evict operation freed up 2401 bytes, but at that time the current bucket size (as per the stats) was (2401 - 1425 =) 976 bytes. Since the updated bucket memory stat went negative, the system threw the exception.
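In other words (a sketch, not the actual Geode source), the check behind BucketRegion.updateBucketMemoryStats is essentially an invariant that the bucket's in-memory byte counter never goes below zero:

import java.util.concurrent.atomic.AtomicLong;

// Sketch of the invariant only; Geode raises InternalGemFireError, this stand-in
// uses IllegalStateException to stay self-contained.
class BucketSizeStat {
    private final AtomicLong bytesInMemory = new AtomicLong();

    void applyEvictDelta(String bucketPath, long delta) {
        long newSize = bytesInMemory.addAndGet(delta); // e.g. 976 + (-2401) = -1425
        if (newSize < 0) {
            throw new IllegalStateException("Bucket " + bucketPath + " size (" + newSize
                    + ") negative after applying delta of " + delta);
        }
    }
}

So the stat claims only 976 bytes are in memory for that bucket, while the evicted entry accounts for 2401 bytes, which is what drives the counter negative.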

If you have a reproducible scenario, can you send it across, or the steps to reproduce?
Can you send your cache xml (or region configuration), and logs and stats if possible?

-Anil.



On Tue, Nov 21, 2017 at 9:04 AM, Guy Turkenits <guy.turken...@amdocs.com> wrote:
+ Viki

From: Gregory Vortman
Sent: Tuesday, November 21, 2017 6:49 PM
To: u...@geode.apache.org; dev@geode.apache.org
Cc: *Technology - Digital - BSS – Charging - GEODE team <pbgrcmrmgeodet...@int.amdocs.com>; Guy Turkenits <guy.turken...@amdocs.com>
Subject: DiskStore exception while region data evicted

Hi team,
One of the grid members went down and the entire cache was closed when a partitioned region hit its LRU threshold and overflow to disk started:
<lru-entry-count maximum="200000" action="overflow-to-disk"/>

The disk store is defined with 40GB.

Actual metrics at the time of the crash: 700000 entries on disk, ~1GB of bytes only on disk.
There is plenty of room in the file system.
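The disk-store definition itself is not included in this mail; as a rough sketch of what the 40GB means in our setup (assuming it is the per-directory size limit, and with a placeholder directory path), the Java API form would look something like:

import java.io.File;
import org.apache.geode.cache.Cache;
import org.apache.geode.cache.DiskStore;
import org.apache.geode.cache.DiskStoreFactory;

class OverflowDiskStore {
    // Illustration only: path and the 40GB-as-directory-limit interpretation
    // are assumptions, not the actual definition used in this cluster.
    static DiskStore create(Cache cache) {
        DiskStoreFactory dsf = cache.createDiskStoreFactory();
        dsf.setDiskDirsAndSizes(
                new File[] {new File("/data/geode/external-record-overflow")},
                new int[] {40 * 1024}); // per-directory limit is given in megabytes
        return dsf.create("ExternalRecord-overflow");
    }
}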

Can you help to understand the following exception:

[severe 2017/11/21 15:41:05.678 IST host1-pwinfo1 <Asynchronous disk writer for region ExternalRecord-overflow> tid=0xdc] Fatal error from asynchronous flusher thread
org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
        at org.apache.geode.internal.cache.BucketRegion.updateBucketMemoryStats(BucketRegion.java:2291)
        at org.apache.geode.internal.cache.BucketRegion.updateBucket2Size(BucketRegion.java:2279)
        at org.apache.geode.internal.cache.BucketRegion.updateSizeOnEvict(BucketRegion.java:2157)
        at org.apache.geode.internal.cache.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1441)
        at org.apache.geode.internal.cache.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1388)
        at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1729)
        at java.lang.Thread.run(Thread.java:748)

[error 2017/11/21 15:41:05.679 IST host1-pwinfo1 <Asynchronous disk writer for region ExternalRecord-overflow> tid=0xdc] A DiskAccessException has occurred while writing to the disk for disk store ExternalRecord-overflow. The cache will be closed.
org.apache.geode.cache.DiskAccessException: For DiskStore: ExternalRecord-overflow: Fatal error from asynchronous flusher thread, caused by org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
        at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1774)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
        at org.apache.geode.internal.cache.BucketRegion.updateBucketMemoryStats(BucketRegion.java:2291)
        at org.apache.geode.internal.cache.BucketRegion.updateBucket2Size(BucketRegion.java:2279)
        at org.apache.geode.internal.cache.BucketRegion.updateSizeOnEvict(BucketRegion.java:2157)
        at org.apache.geode.internal.cache.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1441)
        at org.apache.geode.internal.cache.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1388)
        at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1729)
        ... 1 more

Thanks

Gregory Vortman


