kirklund commented on a change in pull request #6225:
URL: https://github.com/apache/geode/pull/6225#discussion_r609909199



##########
File path: geode-core/src/main/java/org/apache/geode/distributed/internal/ClusterDistributionManager.java
##########
@@ -1881,9 +1881,8 @@ public void handleManagerDeparture(InternalDistributedMember theId, boolean memb
     removeHostedLocators(theId);
     redundancyZones.remove(theId);
 
-    if (theId.getVmKind() != ClusterDistributionManager.LOCATOR_DM_TYPE) {
-      stats.incNodes(-1);
-    }
+    stats.incNodes(-1);
+

Review comment:
       I understand the purpose of the PR better now. However, the PR is still trying to redefine the "nodes" statistic in DistributionStats, which is not correct. You should instead add a new attribute to MemberMXBean that returns the count of OTHER members in the cluster.
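   Something along these lines is what I mean (a minimal sketch, not the actual implementation; the attribute name and the bean wiring here are hypothetical):
   ```
   // In org.apache.geode.management.MemberMXBean:
   /**
    * Hypothetical new attribute: the number of OTHER members in the
    * cluster, i.e. all members (locators included) excluding this one.
    */
   int getOtherMemberCount();

   // In the bean implementation, derive the value from the membership
   // view instead of redefining the "nodes" statistic. (This assumes
   // getDistributionManagerIds() includes the local member.)
   @Override
   public int getOtherMemberCount() {
     return distributionManager.getDistributionManagerIds().size() - 1;
   }
   ```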
   
   Here is the definition of the "nodes" statistic:
   ```
    final String nodesDesc = "The current number of nodes in this distributed system.";
   ```
   So, if you have 2 servers and 2 locators, "nodes" should be 4 within all 4 JVMs.
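   Or, as a quick check (a sketch; this assumes the gauge is readable through a getNodes() accessor on the stats object):
   ```
   // With 2 servers and 2 locators, every member should agree:
   assertThat(stats.getNodes()).isEqualTo(4); // locators are nodes too
   ```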
   
   The earliest version of GemFire had locators which were not members (ie not nodes), but locators now are definitely members (ie nodes). The next type of locator in GemFire was a member (ie a node) but without a cache. As of GemFire 7.0, all locators have a cache. As of Geode 1.0, all locators are members with caches. The methods to create that earliest type of locator still exist but are deprecated... so deprecated internal methods are now the only way to create the thin locators that are not peer-to-peer members of the cluster.
   
   Changing the attribute of an MXBean is _different_ from changing the meaning of the statistic. The statistic is not supposed to ignore locators.
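   To make the split concrete (a sketch; the createIntGauge call shape is assumed from the StatisticsTypeFactory API used elsewhere in DistributionStats):
   ```
   // Statistics layer: "nodes" is a gauge over ALL members, locators
   // included, so handleManagerDeparture decrements it unconditionally
   // (as the diff above now does).
   f.createIntGauge("nodes",
       "The current number of nodes in this distributed system.", "nodes");

   // Management layer: any view that excludes locators or the local
   // member belongs on MemberMXBean, as sketched above.
   ```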




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

