[jira] [Assigned] (GEODE-2918) ConflictingPersistentDataException is not handled properly

2017-05-23 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2918:


Assignee: Anilkumar Gingade

> ConflictingPersistentDataException is not handled properly
> --
>
> Key: GEODE-2918
> URL: https://issues.apache.org/jira/browse/GEODE-2918
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>  Labels: storage_2
> Fix For: 1.2.0
>
>
> During disk recovery the ConflictingPersistentDataException is not handled 
> properly; it should have logged an error and closed the cache.
> When it is handled incorrectly, the cache is left in an inconsistent state, 
> causing other operations to fail in unexpected ways.
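The intended handling described above can be sketched as follows. This is a minimal illustration with stand-in types; the `Cache` interface and exception class below are simplified hypotheticals, not Geode's actual recovery code:

```java
// Minimal sketch of the intended handling: log the conflict and close the
// cache so it is not left in an inconsistent state. The Cache interface and
// exception class are simplified stand-ins, not Geode's actual types.
public class RecoveryHandling {
  interface Cache { void close(); boolean isClosed(); }

  static class ConflictingPersistentDataException extends RuntimeException {
    ConflictingPersistentDataException(String msg) { super(msg); }
  }

  // Returns true if recovery succeeded; on conflict, logs and closes the cache.
  static boolean recover(Cache cache, Runnable diskRecovery) {
    try {
      diskRecovery.run();
      return true;
    } catch (ConflictingPersistentDataException e) {
      System.err.println("Conflicting persistent data during recovery: " + e.getMessage());
      cache.close(); // fail fast instead of leaving an inconsistent cache
      return false;
    }
  }
}
```

The point of the sketch is the fail-fast branch: any other outcome leaves a half-recovered cache that later operations will trip over.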



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (GEODE-2918) ConflictingPersistentDataException is not handled properly

2017-05-12 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2918:


 Summary: ConflictingPersistentDataException is not handled properly
 Key: GEODE-2918
 URL: https://issues.apache.org/jira/browse/GEODE-2918
 Project: Geode
  Issue Type: Bug
  Components: persistence
Reporter: Anilkumar Gingade


During disk recovery the ConflictingPersistentDataException is not handled 
properly; it should have logged an error and closed the cache.

When it is handled incorrectly, the cache is left in an inconsistent state, 
causing other operations to fail in unexpected ways.







[jira] [Resolved] (GEODE-2776) The version tag on client event is not updated when an entry is added to server using load operation.

2017-05-08 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade resolved GEODE-2776.
--
Resolution: Fixed

> The version tag on client event is not updated when an entry is added to 
> server using load operation.
> -
>
> Key: GEODE-2776
> URL: https://issues.apache.org/jira/browse/GEODE-2776
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
> Fix For: 1.2.0
>
>
> When a client does a get() which results in adding an entry by calling the 
> loader on the server side, the client event returned is not updated with the 
> version tag that is created with the new entry on the server. This results in 
> the client having a different version tag than the server-side entry. If the 
> client has registered events and is concurrently updating the entry (from the 
> get() call and a register-event from the server), it could result in data 
> inconsistency between client and server.
> Scenario 1:
> On Server invalidate happens, and the event is added to client queue.
> Client does get()
> On Server, the get() triggers load + put on server. And the response is sent 
> back.
> Client gets the result from get() (which is newer) and applies it to its cache.
> Client gets the invalidate event (older than the get), and applies it to the 
> cache (this is supposed to be conflated, but due to this bug it's not 
> conflated).
> At the end the server has a valid entry in the cache but the client has an 
> invalid entry.
> On Server: INVALID (First), Get(From Client, LOAD+PUT) (later)
> On Client: GET(), PUT using Get Response(), INVALID (old)
> Scenario 2:
> Client does get()
> On Server, the get() triggers load + put on server. And the response is sent 
> back.
> On Server invalidate happens, and the event is added to client queue.
> Client gets the invalidate event, and applies it to the cache.
> Client gets the result from get() (which is older than the invalidate) and 
> applies it to its cache (this is supposed to be conflated, but due to this 
> bug it's not conflated).
> At the end the server has an invalid entry in the cache but the client has a 
> valid entry (old value).
> On Server: Get(From Client, LOAD+PUT), INVALID (later)
> On Client: GET() (new), INVALID (old), PUT using Get Response().
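The conflation both scenarios rely on can be sketched as a simple version comparison: an incoming event is applied only if its version is newer than the one already cached. The types below are illustrative stand-ins, not Geode's VersionTag machinery:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of version-based conflation: each cached value carries a
// version number, and an incoming event is applied only if its version is
// strictly newer. Illustrative stand-in, not Geode's VersionTag machinery.
public class ConflationSketch {
  static class Versioned {
    final Object value; final long version;
    Versioned(Object v, long ver) { value = v; version = ver; }
  }

  private final Map<String, Versioned> cache = new HashMap<>();

  // Returns true if the event was applied, false if conflated (dropped).
  boolean apply(String key, Object value, long version) {
    Versioned current = cache.get(key);
    if (current != null && version <= current.version) {
      return false; // older or duplicate event: conflate it
    }
    cache.put(key, new Versioned(value, version));
    return true;
  }

  Object get(String key) {
    Versioned v = cache.get(key);
    return v == null ? null : v.value;
  }
}
```

In Scenario 1 the get() response would carry the newer version, so the older invalidate (lower version) would be dropped; the reported bug is that the get() response arrives without the server's version tag, so this comparison never happens.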





[jira] [Resolved] (GEODE-2802) TombstoneMessage can throw SerializationException when region is configured as persistent and non-persistent in cluster (in different nodes).

2017-05-08 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade resolved GEODE-2802.
--
   Resolution: Fixed
Fix Version/s: 1.2.0

> TombstoneMessage can throw SerializationException when region is configured 
> as persistent and non-persistent in cluster (in different nodes).
> -
>
> Key: GEODE-2802
> URL: https://issues.apache.org/jira/browse/GEODE-2802
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>  Labels: storage_2
> Fix For: 1.2.0
>
>
> TombstoneMessage serialization code assumes the member info in RVV to be 
> either membership-id or disk-id and uses this info while de-serializing.
> When there is a mix of persistent and non-persistent regions in the cluster 
> (between nodes), the above assumption does not hold, resulting in a data 
> serialization exception.
> DistributedTombstoneOperation$TombstoneMessage
> fromData(DataInput in) {
>   // ...
>   if (persistent) {
>     DiskStoreID id = new DiskStoreID();
>     InternalDataSerializer.invokeFromData(id, in);
>     mbr = id;
>   }
>   // ...
> }





[jira] [Updated] (GEODE-2661) CacheListener gets invoked when a non-existent entry is removed using removeAll

2017-04-27 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade updated GEODE-2661:
-
Description: 
When a non-existing entry is removed using removeAll from a PartitionedRegion 
(need to verify this on replicated), the CacheListener's afterDestroy callback 
method gets invoked. The afterDestroy should not be invoked for an entry which 
is not present.

How to reproduce:
region.put(k1, v1);
region.put(k2, v2);

// Remove all from client
List<String> keys = Arrays.asList("k1", "k2", "k8");
region.removeAll(keys);

The afterDestroy callback will be invoked for k8 on the server.


  was:
When a non-existing entry is removed using removeAll from a PartitionedRegion 
(need to verify this on replicated), the CacheListener's afterDestroy callback 
method gets invoked. The afterDestroy should not be invoked for an entry which 
is not present.

How to reproduce:
region.put(k1, v1);
region.put(k2, v2);

List<String> keys = Arrays.asList("k1", "k2", "k8");
region.removeAll(keys);

The afterDestroy callback will be invoked for k8.



> CacheListener gets invoked when a non-existent entry is removed using 
> removeAll
> 
>
> Key: GEODE-2661
> URL: https://issues.apache.org/jira/browse/GEODE-2661
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Lynn Gallinat
>  Labels: storage_2
>
> When a non-existing entry is removed using removeAll from a PartitionedRegion 
> (need to verify this on replicated), the CacheListener's afterDestroy 
> callback method gets invoked. The afterDestroy should not be invoked for an 
> entry which is not present.
> How to reproduce:
> region.put(k1, v1);
> region.put(k2, v2);
> // Remove all from client
> List<String> keys = Arrays.asList("k1", "k2", "k8");
> region.removeAll(keys);
> The afterDestroy callback will be invoked for k8 on the server.





[jira] [Created] (GEODE-2829) VMRegionVersionVector allows/stores DiskStoreId as its member id

2017-04-26 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2829:


 Summary: VMRegionVersionVector allows/stores DiskStoreId as its 
member id
 Key: GEODE-2829
 URL: https://issues.apache.org/jira/browse/GEODE-2829
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


The VMRegionVersionVector is a region version vector for regions without 
persistent data. This region version vector is supposed to allow only the 
InternalDistributedMember as the member id, but currently it allows both 
DiskStoreId and InternalDistributedMember as member ids.

This is related to GEODE-2802.

The issue can be reproduced by having persistent and non-persistent region in 
the cluster (same region name). 





[jira] [Assigned] (GEODE-2802) TombstoneMessage can throw SerializationException when region is configured as persistent and non-persistent in cluster (in different nodes).

2017-04-19 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2802:


Assignee: Anilkumar Gingade

> TombstoneMessage can throw SerializationException when region is configured 
> as persistent and non-persistent in cluster (in different nodes).
> -
>
> Key: GEODE-2802
> URL: https://issues.apache.org/jira/browse/GEODE-2802
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> TombstoneMessage serialization code assumes the member info in RVV to be 
> either membership-id or disk-id and uses this info while de-serializing.
> When there is a mix of persistent and non-persistent regions in the cluster 
> (between nodes), the above assumption does not hold, resulting in a data 
> serialization exception.
> DistributedTombstoneOperation$TombstoneMessage
> fromData(DataInput in) {
>   // ...
>   if (persistent) {
>     DiskStoreID id = new DiskStoreID();
>     InternalDataSerializer.invokeFromData(id, in);
>     mbr = id;
>   }
>   // ...
> }





[jira] [Created] (GEODE-2802) TombstoneMessage can throw SerializationException when region is configured as persistent and non-persistent in cluster (in different nodes).

2017-04-19 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2802:


 Summary: TombstoneMessage can throw SerializationException when 
region is configured as persistent and non-persistent in cluster (in different 
nodes).
 Key: GEODE-2802
 URL: https://issues.apache.org/jira/browse/GEODE-2802
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


TombstoneMessage serialization code assumes the member info in RVV to be either 
membership-id or disk-id and uses this info while de-serializing.
When there is a mix of persistent and non-persistent regions in the cluster 
(between nodes), the above assumption does not hold, resulting in a data 
serialization exception.

DistributedTombstoneOperation$TombstoneMessage
fromData(DataInput in) {
  // ...
  if (persistent) {
    DiskStoreID id = new DiskStoreID();
    InternalDataSerializer.invokeFromData(id, in);
    mbr = id;
  }
  // ...
}
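The mismatch can be avoided by writing a marker that tells the reader which member-id type follows. A self-contained sketch of that symmetric toData/fromData pattern, using plain java.io streams as stand-ins for Geode's serializer and id types:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Sketch of a symmetric serialization pattern: the writer emits a boolean
// marker saying which member-id type follows, and the reader consumes the
// same marker before deciding how to deserialize. Plain java.io stand-in
// for the Geode types involved.
public class MemberIdCodec {
  // Serialize: marker first, then the id payload.
  static byte[] write(boolean persistent, String memberId) {
    try {
      ByteArrayOutputStream bos = new ByteArrayOutputStream();
      DataOutputStream out = new DataOutputStream(bos);
      out.writeBoolean(persistent); // marker: disk-store id vs membership id
      out.writeUTF(memberId);
      out.flush();
      return bos.toByteArray();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  // Deserialize: the marker in the stream, not an assumption about the
  // local region's configuration, drives the branch.
  static String read(byte[] bytes) {
    try {
      DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
      boolean persistent = in.readBoolean();
      String id = in.readUTF();
      return (persistent ? "disk:" : "member:") + id;
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```

The bug described above is precisely the absence of such an agreed marker: the reader branches on its own notion of "persistent", which can differ from the writer's when nodes configure the same region differently.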






[jira] [Created] (GEODE-2776) The version tag on client event is not updated when an entry is added to server using load operation.

2017-04-12 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2776:


 Summary: The version tag on client event is not updated when an 
entry is added to server using load operation.
 Key: GEODE-2776
 URL: https://issues.apache.org/jira/browse/GEODE-2776
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


When a client does a get() which results in adding an entry by calling the 
loader on the server side, the client event returned is not updated with the 
version tag that is created with the new entry on the server. This results in 
the client having a different version tag than the server-side entry. If the 
client has registered events and is concurrently updating the entry (from the 
get() call and a register-event from the server), it could result in data 
inconsistency between client and server.

Scenario 1:
On Server invalidate happens, and the event is added to client queue.
Client does get()
On Server, the get() triggers load + put on server. And the response is sent 
back.
Client gets the result from get() (which is newer) and applies it to its cache.
Client gets the invalidate event (older than the get), and applies it to the 
cache (this is supposed to be conflated, but due to this bug it's not conflated).
At the end the server has a valid entry in the cache but the client has an 
invalid entry.

On Server: INVALID (First), Get(From Client, LOAD+PUT) (later)
On Client: GET(), PUT using Get Response(), INVALID (old)

Scenario 2:
Client does get()
On Server, the get() triggers load + put on server. And the response is sent 
back.
On Server invalidate happens, and the event is added to client queue.
Client gets the invalidate event, and applies it to the cache.
Client gets the result from get() (which is older than the invalidate) and 
applies it to its cache (this is supposed to be conflated, but due to this bug 
it's not conflated).
At the end the server has an invalid entry in the cache but the client has a 
valid entry (old value).
On Server: Get(From Client, LOAD+PUT), INVALID (later)
On Client: GET() (new), INVALID (old), PUT using Get Response().





[jira] [Reopened] (GEODE-2398) Sporadic Oplog corruption due to channel.write failure

2017-03-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reopened GEODE-2398:
--

We need to make a similar change (as done by Ken in Oplog) in OverflowOplog.

> Sporadic Oplog corruption due to channel.write failure
> --
>
> Key: GEODE-2398
> URL: https://issues.apache.org/jira/browse/GEODE-2398
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Kenneth Howe
>Assignee: Kenneth Howe
> Fix For: 1.2.0
>
>
> There have been some occurrences of Oplog corruption during testing that have 
> been traced to failures in writing oplog entries to the .crf file. The failure 
> occurs when Oplog.flush attempts to write a ByteBuffer to the file channel: 
> the call to channel.write(bb) returns 0 bytes written, but the source 
> ByteBuffer position is still moved to the ByteBuffer limit.
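A defensive write loop that guards against short or zero-byte writes might look like this; a general java.nio sketch, not Geode's actual Oplog.flush:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.WritableByteChannel;

// Defensive flush sketch: keep writing until the buffer is drained, tracking
// progress from the channel's return values instead of assuming one write()
// call consumes everything. General java.nio pattern, not Geode's Oplog code.
public class SafeFlush {
  static int flush(WritableByteChannel channel, ByteBuffer bb) {
    int total = 0;
    try {
      while (bb.hasRemaining()) {
        int written = channel.write(bb);
        if (written < 0) {
          throw new IOException("channel closed during flush");
        }
        total += written; // 0 is legal for non-blocking channels; loop retries
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    return total;
  }
}
```

Note that the loop trusts the buffer's own position for termination; the bug report describes the opposite situation, where the position advanced to the limit even though write() reported 0 bytes, which this kind of accounting would surface immediately.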





[jira] [Assigned] (GEODE-2398) Sporadic Oplog corruption due to channel.write failure

2017-03-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2398:


Assignee: Anilkumar Gingade  (was: Kenneth Howe)

> Sporadic Oplog corruption due to channel.write failure
> --
>
> Key: GEODE-2398
> URL: https://issues.apache.org/jira/browse/GEODE-2398
> Project: Geode
>  Issue Type: Bug
>  Components: persistence
>Reporter: Kenneth Howe
>Assignee: Anilkumar Gingade
> Fix For: 1.2.0
>
>
> There have been some occurrences of Oplog corruption during testing that have 
> been traced to failures in writing oplog entries to the .crf file. The failure 
> occurs when Oplog.flush attempts to write a ByteBuffer to the file channel: 
> the call to channel.write(bb) returns 0 bytes written, but the source 
> ByteBuffer position is still moved to the ByteBuffer limit.





[jira] [Created] (GEODE-2661) CacheListener gets invoked when a non-existent entry is removed using removeAll

2017-03-14 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2661:


 Summary: CacheListener gets invoked when a non-existent entry is removed using removeAll
 Key: GEODE-2661
 URL: https://issues.apache.org/jira/browse/GEODE-2661
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


When a non-existing entry is removed using removeAll from a PartitionedRegion 
(need to verify this on replicated), the CacheListener's afterDestroy callback 
method gets invoked. The afterDestroy should not be invoked for an entry which 
is not present.

How to reproduce:
region.put(k1, v1);
region.put(k2, v2);

List<String> keys = Arrays.asList("k1", "k2", "k8");
region.removeAll(keys);

The afterDestroy callback will be invoked for k8.
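The expected behavior can be sketched with a minimal map-backed region stand-in (not Geode's Region API): the destroy callback fires only for keys that were actually present.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Minimal map-backed stand-in for a region, illustrating the expected
// behavior: afterDestroy fires only for keys that actually existed.
// This is an illustrative sketch, not Geode's Region API.
public class RemoveAllSketch {
  private final Map<String, String> data = new HashMap<>();
  private final BiConsumer<String, String> afterDestroy;

  RemoveAllSketch(BiConsumer<String, String> afterDestroy) {
    this.afterDestroy = afterDestroy;
  }

  void put(String k, String v) { data.put(k, v); }

  void removeAll(Collection<String> keys) {
    for (String k : keys) {
      String old = data.remove(k);
      if (old != null) {           // guard: skip the callback for absent entries
        afterDestroy.accept(k, old);
      }
    }
  }
}
```

Run against the repro above, the callback fires for k1 and k2 but not for the never-present k8; the reported bug is that the PartitionedRegion path lacks this existence check on the server.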






[jira] [Assigned] (GEODE-2490) Tombstone messages are getting processed inline

2017-02-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2490:


Assignee: Anilkumar Gingade

> Tombstone messages are getting processed inline
> ---
>
> Key: GEODE-2490
> URL: https://issues.apache.org/jira/browse/GEODE-2490
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> Tombstone:
> As part of consistency checking, when an entry is destroyed, the member 
> temporarily retains the entry to detect possible conflicts with operations 
> that have occurred. The retained entry is referred to as a tombstone.
> When tombstones are removed, tombstone messages are sent to region replicas; 
> and in case of Partitioned Region (PR) messages are also sent to peer region 
> nodes for client events.
> Currently the tombstone messages sent to replicas are getting processed 
> in-line. Depending on the number of nodes in the cluster, this may take a 
> long time to process, impacting other cache operations that are required to 
> be processed in-line.





[jira] [Created] (GEODE-2490) Tombstone messages are getting processed inline

2017-02-15 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2490:


 Summary: Tombstone messages are getting processed inline
 Key: GEODE-2490
 URL: https://issues.apache.org/jira/browse/GEODE-2490
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


Tombstone:
As part of consistency checking, when an entry is destroyed, the member 
temporarily retains the entry to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a tombstone.

When tombstones are removed, tombstone messages are sent to region replicas; 
and in case of Partitioned Region (PR) messages are also sent to peer region 
nodes for client events.

Currently the tombstone messages sent to replicas are getting processed 
in-line. Depending on the number of nodes in the cluster, this may take a long 
time to process, impacting other cache operations that are required to be 
processed in-line.
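One common remedy for the in-line processing described above is to hand such messages to a dedicated executor so slow batches do not block the receiving thread. A generic java.util.concurrent sketch, not Geode's actual threading model:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch of moving message processing off the in-line (receiver) thread onto
// a dedicated executor, so slow tombstone batches do not block other cache
// operations. Generic java.util.concurrent pattern, not Geode's code.
public class OffloadSketch {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  // Instead of processing in-line, queue the work and return immediately.
  Future<?> submit(Runnable tombstoneWork) {
    return pool.submit(tombstoneWork);
  }

  // Drain the queue and stop the worker thread.
  void shutdown() {
    pool.shutdown();
    try {
      pool.awaitTermination(5, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
```

The trade-off is ordering: a single-threaded executor preserves the arrival order of tombstone batches while still freeing the in-line thread.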





[jira] [Assigned] (GEODE-2489) Tombstone messages with keys are sent to peer partitioned region nodes even though no clients are registered

2017-02-15 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-2489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade reassigned GEODE-2489:


Assignee: Anilkumar Gingade

> Tombstone messages with keys are sent to peer partitioned region nodes even 
> though no clients are registered
> ---
>
> Key: GEODE-2489
> URL: https://issues.apache.org/jira/browse/GEODE-2489
> Project: Geode
>  Issue Type: Bug
>  Components: regions
>Reporter: Anilkumar Gingade
>Assignee: Anilkumar Gingade
>
> Tombstone:
> As part of consistency checking,  when an entry is destroyed, the member 
> temporarily retains the entry to detect possible conflicts with operations 
> that have occurred. The retained entry is referred to as a tombstone. 
> When tombstones are removed, tombstone messages are sent to region replicas; 
> and in case of Partitioned Region (PR) messages are also sent to peer region 
> nodes for client events.
> Currently tombstone messages meant for clients that have all the keys removed 
> are getting sent to peer PR nodes even though no clients are registered on 
> those peers.
> Based on the number of tombstone keys processed (by default 10) this could 
> be a large message sent to the peer node, which could impact the performance 
> of the system/cluster.





[jira] [Created] (GEODE-2489) Tombstone messages with keys are sent to peer partitioned region nodes even though no clients are registered

2017-02-15 Thread Anilkumar Gingade (JIRA)
Anilkumar Gingade created GEODE-2489:


 Summary: Tombstone messages with keys are sent to peer partitioned region nodes even though no clients are registered
 Key: GEODE-2489
 URL: https://issues.apache.org/jira/browse/GEODE-2489
 Project: Geode
  Issue Type: Bug
  Components: regions
Reporter: Anilkumar Gingade


Tombstone:
As part of consistency checking,  when an entry is destroyed, the member 
temporarily retains the entry to detect possible conflicts with operations that 
have occurred. The retained entry is referred to as a tombstone. 

When tombstones are removed, tombstone messages are sent to region replicas; 
and in case of Partitioned Region (PR) messages are also sent to peer region 
nodes for client events.

Currently tombstone messages meant for clients that have all the keys removed 
are getting sent to peer PR nodes even though no clients are registered on 
those peers.

Based on the number of tombstone keys processed (by default 10) this could be 
a large message sent to the peer node, which could impact the performance of 
the system/cluster.






[jira] [Closed] (GEODE-1672) When amount of overflowed persisted data exceeds heap size startup may run out of memory

2017-02-07 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade closed GEODE-1672.


> When amount of overflowed persisted data exceeds heap size startup may run 
> out of memory
> 
>
> Key: GEODE-1672
> URL: https://issues.apache.org/jira/browse/GEODE-1672
> Project: Geode
>  Issue Type: Bug
>  Components: docs, persistence
>Reporter: Darrel Schneider
>Assignee: Anilkumar Gingade
> Fix For: 1.2.0
>
>
> Basically, when the amount of data overflowed approaches the heap size, such 
> that the total amount of data is very close to or actually surpasses your 
> total tenured heap, it is possible that you will not be able to restart.
> The algorithm during recovery of oplogs/buckets is such that we don't "evict" 
> in the normal sense as data fills the heap during early stages of recovery 
> prior to creating the regions. When the data is first created in the heap, 
> it's not yet official in the region.
> At any rate, if during this early phase of recovery, or during subsequent 
> phase where eviction is working as usual, it is possible that the total data 
> or an early imbalance of buckets prior to the opportunity to rebalance causes 
> us to surpass the critical threshold which will kill us before successful 
> startup.
> To reproduce, you could have 1 region with tons of data that evicts and 
> overflows with persistence. Call it R1. Then another region with persistence 
> that does not evict. Call it R2.
> List R1 first in the cache.xml file. Start running the system and add data 
> over time until you have overflowed tons of data approaching the heap size in 
> the evicted region, and also have enough data in the R2 region.
> Once you fill these regions with enough data and have overflowed enough to 
> disk and persisted the other region, then shutdown, and then attempt to 
> restart. If you put enough data in, you will hit the critical threshold 
> before being able to complete startup.
> You can work around this issue by configuring Geode to not recover values by 
> setting this system property: -Dgemfire.disk.recoverValues=false
> Values will not be faulted into memory until a read operation is done on that 
> value's key.
> If you have regions that do not use overflow and some that do, then another 
> workaround is to create the regions that do not use overflow first.





[jira] [Updated] (GEODE-1211) Multiple Regions using the same DiskStore cause double-counting in the member TotalDiskUsage JMX attribute

2016-12-19 Thread Anilkumar Gingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/GEODE-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anilkumar Gingade updated GEODE-1211:
-
Component/s: (was: persistence)

> Multiple Regions using the same DiskStore cause double-counting in the member 
> TotalDiskUsage JMX attribute
> --
>
> Key: GEODE-1211
> URL: https://issues.apache.org/jira/browse/GEODE-1211
> Project: Geode
>  Issue Type: Bug
>  Components: jmx, management, statistics
>Reporter: Barry Oglesby
>
> Here is what I'm seeing in my simple tests.
> After putting entries into two persistent replicated regions, each using its 
> own disk store:
> {{MemberMBean.getTotalDiskUsage}} totals each disk store's bytes on disk 
> properly:
> {noformat}
> MemberMBean.getTotalDiskUsage returning 1298573 bytes
> DiskStoreMBeanBridge.getTotalBytesOnDisk data-rr_store returning 649253 bytes
> DiskStoreMBeanBridge.getTotalBytesOnDisk data2-rr_store returning 649320 bytes
> {noformat}
> After putting entries into two persistent replicated regions, each using the 
> same disk store:
> {{MemberMBean.getTotalDiskUsage}} double-counts the disk store bytes on disk:
> {noformat}
> MemberMBean.getTotalDiskUsage returning 2596956 bytes
> DiskStoreMBeanBridge.getTotalBytesOnDisk data-rr_store returning 1298478 bytes
> {noformat}
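The double-counting could be avoided by summing disk usage per distinct disk store rather than per region. A minimal sketch with plain Java collections; the names and data shapes here are hypothetical, not the actual MemberMBean/DiskStoreMBeanBridge API:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch of de-duplicating disk usage by disk store: regions referencing the
// same store contribute its bytes only once. Names and shapes here are
// hypothetical, not Geode's MemberMBean/DiskStoreMBeanBridge API.
public class DiskUsageSketch {
  // regionToStore: region name -> disk store name
  // storeBytes:    disk store name -> total bytes on disk
  static long totalDiskUsage(Map<String, String> regionToStore,
                             Map<String, Long> storeBytes) {
    Set<String> seen = new HashSet<>();
    long total = 0;
    for (String store : regionToStore.values()) {
      if (seen.add(store)) {        // count each disk store exactly once
        total += storeBytes.getOrDefault(store, 0L);
      }
    }
    return total;
  }
}
```

With the numbers from the second test above, two regions sharing data-rr_store would report 1298478 bytes once, not 2596956.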



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)