dschneider-pivotal commented on a change in pull request #6430:
URL: https://github.com/apache/geode/pull/6430#discussion_r650288278



##########
File path: geode-core/src/integrationTest/java/org/apache/geode/internal/cache/PartitionedRegionStatsJUnitTest.java
##########
@@ -477,4 +485,139 @@ private long getMemBytes(PartitionedRegion pr) {
 
     return bytes;
   }
+
+  @Test
+  public void incBucketClearCountIncrementsClears() {
+    String regionName = "testStats";
+    int localMaxMemory = 100;
+    PartitionedRegion pr = createPR(regionName + 1, localMaxMemory, 0);
+
+    pr.getPrStats().incBucketClearCount();
+
+    assertThat(pr.getPrStats().getStats().getLong(bucketClearsId)).isEqualTo(1L);
+  }
+
+  @Test
+  public void bucketClearsWrapsFromMaxLongToNegativeValue() {
+    String regionName = "testStats";
+    int localMaxMemory = 100;
+    PartitionedRegion pr = createPR(regionName + 1, localMaxMemory, 0);
+    PartitionedRegionStats partitionedRegionStats = pr.getPrStats();
+    partitionedRegionStats.getStats().incLong(bucketClearsId, Long.MAX_VALUE);
+
+    partitionedRegionStats.incBucketClearCount();
+
+    assertThat(partitionedRegionStats.getBucketClearCount()).isNegative();
+  }
+
+  @Test
+  public void testPartitionedRegionClearStats() {
+    String regionName = "testStats";
+    int localMaxMemory = 100;
+    PartitionedRegion pr = createPR(regionName + 1, localMaxMemory, 0);
+
+    final int bucketMax = pr.getTotalNumberOfBuckets();
+    for (long i = 0L; i < 10000; i++) {
+      try {
+        pr.put(i, i);
+      } catch (PartitionedRegionStorageException ex) {
+        this.logger.warning(ex);
+      }
+    }
+
+    assertThat(pr.getPrStats().getTotalBucketCount()).isEqualTo(bucketMax);
+    assertThat(pr.size()).isEqualTo(10000);
+    pr.clear();
+    assertThat(pr.size()).isEqualTo(0);
+    assertThat(pr.getPrStats().getStats().getLong(bucketClearsId)).isEqualTo(bucketMax);
+  }
+
+  @Test
+  public void testBasicPartitionedRegionClearTimeStat() {
+    String regionName = "testStats";
+    int localMaxMemory = 100;
+    PartitionedRegion pr = createPR(regionName + 1, localMaxMemory, 0);
+    assertThat(pr.getPrStats().getBucketClearTime()).isEqualTo(0L);
+
+    pr.getPrStats().incBucketClearTime(137L);
+    assertThat(pr.getPrStats().getBucketClearTime()).isEqualTo(137L);
+  }
+
+  @Test
+  public void testFullPartitionedRegionClearTimeStat() {
+    String regionName = "testStats";
+    int localMaxMemory = 100;
+    PartitionedRegion pr = createPR(regionName + 1, localMaxMemory, 0);
+
+    for (long i = 0L; i < 10000; i++) {
+      try {
+        pr.put(i, i);
+      } catch (PartitionedRegionStorageException ex) {
+        this.logger.warning(ex);
+      }
+    }
+
+    assertThat(pr.size()).isEqualTo(10000);
+    assertThat(pr.getPrStats().getBucketClearCount()).isEqualTo(0L);
+
+    assertThat(pr.getPrStats().getBucketClearTime()).isEqualTo(0L);
+    pr.clear();
+    assertThat(pr.getPrStats().getBucketClearCount()).isGreaterThan(0L);
+
+    assertThat(pr.getPrStats().getBucketClearTime()).isGreaterThan(0L);

Review comment:
       If you could have the test insert its own implementation of StatisticsClock, the time assertions would be deterministic. Someone already did the work of adding the StatisticsClock interface, probably just for unit testing. GemFireCacheImpl calls the static method StatisticsClockFactory.clock(boolean) and ends up passing that clock into the stats when they are created. Kirk might have some ideas on how tests that exercise time stats could get a deterministic clock instead of a real one. These time stats tend to fail intermittently, so it would be great to find a solution.
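
       For illustration only, a test-supplied clock would not need to be elaborate. The sketch below shows one possible shape; it assumes StatisticsClock exposes a getTime() accessor (the value StatisticsClockFactory.clock(boolean) ultimately hands to the stats), and the class name, fixed-step behavior, and isEnabled() method are made up for this example. How the fake would actually reach PartitionedRegionStats (constructor argument, cache wiring, etc.) depends on what injection seam the production code offers.

```java
// Sketch only: a deterministic clock a test could supply in place of the real
// StatisticsClockFactory.clock(boolean) result. Assumes StatisticsClock has a
// getTime() method; everything else here is hypothetical test scaffolding.
class FakeStatisticsClock implements StatisticsClock {
  private final long stepNanos;
  private long nanos;

  FakeStatisticsClock(long stepNanos) {
    this.stepNanos = stepNanos;
  }

  // Every read advances "time" by a fixed step, so any start/end pair of reads
  // (for example around a bucket clear) yields a known, assertable elapsed time.
  @Override
  public long getTime() {
    long current = nanos;
    nanos += stepNanos;
    return current;
  }

  // Declared defensively in case the interface expects an enabled flag.
  public boolean isEnabled() {
    return true;
  }
}
```

       With something like that wired in, testFullPartitionedRegionClearTimeStat could assert an exact (or at least predictable) getBucketClearTime() value instead of isGreaterThan(0L), which should remove the intermittent failures.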




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

