Hi Gregory,

Was the node stopped and restarted? If so, how many times?
It looks like the evict operation freed 2401 bytes, but at that time the
current bucket size (per the stats) was only (2401 - 1425 =) 976 bytes. Since
the updated bucket memory stat went negative, the system threw an exception.
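To make that concrete, here is a standalone sketch of the failing check
(the class name and structure below are illustrative only, not Geode's
actual BucketRegion code; only the numbers and the message shape come from
your log):

```java
// Minimal sketch of the invariant that failed, reconstructed from the log
// message (simplified; not Geode's actual implementation).
public class BucketSizeCheck {
    private long bytesInMemory;  // the bucket's in-memory size stat

    public BucketSizeCheck(long initialBytes) {
        this.bytesInMemory = initialBytes;
    }

    // Mirrors the check that throws in BucketRegion.updateBucketMemoryStats:
    // the size stat must never go negative after applying a delta.
    public void applyDelta(long delta) {
        long updated = bytesInMemory + delta;
        if (updated < 0) {
            throw new IllegalStateException(
                "size (" + updated + ") negative after applying delta of " + delta);
        }
        bytesInMemory = updated;
    }

    public long size() {
        return bytesInMemory;
    }

    public static void main(String[] args) {
        // Stats said the bucket held 976 bytes, but the eviction reported
        // freeing 2401 bytes: 976 - 2401 = -1425, so the check fires.
        BucketSizeCheck bucket = new BucketSizeCheck(976);
        try {
            bucket.applyDelta(-2401);
        } catch (IllegalStateException e) {
            // prints: size (-1425) negative after applying delta of -2401
            System.out.println(e.getMessage());
        }
    }
}
```

So the question is how the tracked bucket size fell out of sync with the
bytes the eviction actually removed.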

If you have a reproducible scenario, can you send it across, or the steps to
reproduce?
Can you also send your cache xml (or region configuration), along with logs
and stats if possible?
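Something along these lines is what I mean (a sketch only; the region and
disk-store names are taken from your log, and every other attribute here is
an assumption, not your actual configuration):

```xml
<cache>
  <disk-store name="ExternalRecord-overflow"/>
  <region name="ExternalRecord">
    <region-attributes refid="PARTITION"
                       disk-store-name="ExternalRecord-overflow"
                       disk-synchronous="false">
      <eviction-attributes>
        <lru-entry-count maximum="200000" action="overflow-to-disk"/>
      </eviction-attributes>
    </region-attributes>
  </region>
</cache>
```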

-Anil.



On Tue, Nov 21, 2017 at 9:04 AM, Guy Turkenits <guy.turken...@amdocs.com>
wrote:

> + Viki
>
> From: Gregory Vortman
> Sent: Tuesday, November 21, 2017 6:49 PM
> To: u...@geode.apache.org; dev@geode.apache.org
> Cc: Technology - Digital - BSS – Charging - GEODE team <
> pbgrcmrmgeodet...@int.amdocs.com>; Guy Turkenits <guy.turken...@amdocs.com
> >
> Subject: DiskStore exception while region data evicted
>
> Hi team,
> One of the grid members went down and the entire cache was closed when a
> partitioned region reached its LRU threshold and overflow to disk started:
> <lru-entry-count maximum="200000" action="overflow-to-disk"/>
>
> The disk store is defined with 40 GB.
>
> Actual metrics at crash time: 700000 entries on disk, ~1 GB of bytes only
> on disk.
> There is plenty of room in the file system.
>
> Can you help to understand the following exception:
>
> [severe 2017/11/21 15:41:05.678 IST host1-pwinfo1 <Asynchronous disk writer for region ExternalRecord-overflow> tid=0xdc] Fatal error from asynchronous flusher thread
> org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
>         at org.apache.geode.internal.cache.BucketRegion.updateBucketMemoryStats(BucketRegion.java:2291)
>         at org.apache.geode.internal.cache.BucketRegion.updateBucket2Size(BucketRegion.java:2279)
>         at org.apache.geode.internal.cache.BucketRegion.updateSizeOnEvict(BucketRegion.java:2157)
>         at org.apache.geode.internal.cache.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1441)
>         at org.apache.geode.internal.cache.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1388)
>         at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1729)
>         at java.lang.Thread.run(Thread.java:748)
>
> [error 2017/11/21 15:41:05.679 IST host1-pwinfo1 <Asynchronous disk writer for region ExternalRecord-overflow> tid=0xdc] A DiskAccessException has occurred while writing to the disk for disk store ExternalRecord-overflow. The cache will be closed.
> org.apache.geode.cache.DiskAccessException: For DiskStore: ExternalRecord-overflow: Fatal error from asynchronous flusher thread, caused by org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
>         at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1774)
>         at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.geode.InternalGemFireError: Bucket BucketRegion[path='/__PR/_B__EXTERNAL__RECORDS__1_171;serial=6025;primary=true] size (-1425) negative after applying delta of -2401
>         at org.apache.geode.internal.cache.BucketRegion.updateBucketMemoryStats(BucketRegion.java:2291)
>         at org.apache.geode.internal.cache.BucketRegion.updateBucket2Size(BucketRegion.java:2279)
>         at org.apache.geode.internal.cache.BucketRegion.updateSizeOnEvict(BucketRegion.java:2157)
>         at org.apache.geode.internal.cache.DiskEntry$Helper.writeEntryToDisk(DiskEntry.java:1441)
>         at org.apache.geode.internal.cache.DiskEntry$Helper.doAsyncFlush(DiskEntry.java:1388)
>         at org.apache.geode.internal.cache.DiskStoreImpl$FlusherThread.run(DiskStoreImpl.java:1729)
>         ... 1 more
>
> Thanks
>
> Gregory Vortman
>
>
