[GitHub] [geode] jvarenina commented on a change in pull request #5360: GEODE-8329: Fix for durable CQ reqistration recovery

2020-09-25 Thread GitBox


jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r494922796



##
File path: geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, boolean isDurable) {
           .set(((DefaultQueryService) this.pool.getQueryService()).getUserAttributes(name));
     }
     try {
-      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
   Hi @agingade ,
   
   Sorry to bother you, but this change request has been pending for a long time. If you don't have time, or my comments are not clear, please don't hesitate to contact me; I would be really grateful if we could decide how to proceed with this PR.






This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [geode] jvarenina commented on a change in pull request #5360: GEODE-8329: Fix for durable CQ reqistration recovery

2020-07-21 Thread GitBox


jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r457926652



##
File path: geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, boolean isDurable) {
           .set(((DefaultQueryService) this.pool.getQueryService()).getUserAttributes(name));
     }
     try {
-      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
   Hi @agingade ,
   
   Have you had time to check the above comments?









[GitHub] [geode] jvarenina commented on a change in pull request #5360: GEODE-8329: Fix for durable CQ reqistration recovery

2020-07-13 Thread GitBox


jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r453489804



##
File path: geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, boolean isDurable) {
           .set(((DefaultQueryService) this.pool.getQueryService()).getUserAttributes(name));
     }
     try {
-      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
   I have tested the TC with redundancy configured, and it seems that the recovery of CQs is done differently in this case. The primary server sends all relevant CQ information to the starting server within `InitialImageOperation$FilterInfoMessage`. On receiving the message, the starting server then registers the CQs as durable (so no problem is observed in this case).
   
   **Primary server:**
   ```
   [debug 2020/07/10 13:30:54.916 CEST :41001 shared unordered uid=1 local port=53683 
remote port=45674> tid=0x57] Received message 
'InitialImageOperation$RequestFilterInfoMessage(region 
path='/_gfe_durable_client_with_id_AppCounters_1_queue'; 
sender=192.168.1.102(server3:31347):41001; processorId=27)' from 
<192.168.1.102(server3:31347):41001>
   ```
   
   **Starting server:**
   ```
   [debug 2020/07/10 13:30:54.916 CEST  
tid=0x48] Sending (InitialImageOperation$RequestFilterInfoMessage(region 
path='/_gfe_durable_client_with_id_AppCounters_1_queue'; 
sender=192.168.1.102(server3:31347):41001; processorId=27)) to 1 peers 
([192.168.1.102(server1:30862):41000]) via tcp/ip
   
   [debug 2020/07/10 13:30:54.918 CEST :41000 shared unordered uid=5 local port=52175 
remote port=46552> tid=0x30] Received message 
'InitialImageOperation$FilterInfoMessage processorId=27 from 
192.168.1.102(server1:30862):41000; NON_DURABLE allKeys=0; allKeysInv=0; 
keysOfInterest=0; keysOfInterestInv=0; patternsOfInterest=0; 
patternsOfInterestInv=0; filtersOfInterest=0; filtersOfInterestInv=0; DURABLE 
allKeys=0; allKeysInv=0; keysOfInterest=0; keysOfInterestInv=0; 
patternsOfInterest=0; patternsOfInterestInv=0; filtersOfInterest=0; 
filtersOfInterestInv=0; cqs=1' from <192.168.1.102(server1:30862):41000>
   
   [debug 2020/07/10 13:30:54.919 CEST  tid=0x3d] Processing FilterInfo for proxy: 
CacheClientProxy[identity(192.168.1.102(31226:loner):45576:8b927d38,connection=1,durableAttributes=DurableClientAttributes[id=AppCounters;
 timeout=200]); port=57552; primary=false; version=GEODE 1.12.0] : 
InitialImageOperation$FilterInfoMessage processorId=27 from 
192.168.1.102(server1:30862):41000; NON_DURABLE allKeys=0; allKeysInv=0; 
keysOfInterest=0; keysOfInterestInv=0; patternsOfInterest=0; 
patternsOfInterestInv=0; filtersOfInterest=0; filtersOfInterestInv=0; DURABLE 
allKeys=0; allKeysInv=0; keysOfInterest=0; keysOfInterestInv=0; 
patternsOfInterest=0; patternsOfInterestInv=0; filtersOfInterest=0; 
filtersOfInterestInv=0; cqs=1
   
   [debug 2020/07/10 13:30:54.944 CEST  tid=0x3d] Server side query for the cq: randomTracker is: SELECT * FROM 
/example-region i where i > 70
   [debug 2020/07/10 13:30:54.944 CEST  tid=0x3d] Added CQ to the base region: /example-region With key as: 
randomTracker__AppCounters
   [debug 2020/07/10 13:30:54.944 CEST  tid=0x3d] Adding CQ into MatchingCQ map, CQName: randomTracker__AppCounters 
Number of matched querys are: 1
   [debug 2020/07/10 13:30:54.945 CEST  tid=0x3d] Adding to CQ Repository. CqName : randomTracker ServerCqName : 
randomTracker__AppCounters
   [debug 2020/07/10 13:30:54.945 CEST  tid=0x3d] Adding CQ randomTracker__AppCounters to this members FilterProfile.
   [debug 2020/07/10 13:30:54.945 CEST  tid=0x3d] Successfully created CQ on the server. CqName : randomTracker
   ```
   
   I can attach the full logs if you need them. Also, I have found the following comment in the client code:
   ```
   // Even though the new redundant queue will usually recover
   // subscription information (see bug #39014) from its initial
   // image provider, in bug #42280 we found that this is not always
   // the case, so clients must always register interest with the new
   // redundant server.
   if (recoverInterest) {
     recoverInterest(queueConnection, isFirstNewConnection);
   }
   ```
   It is stated here that there is a possible case where the redundant queue isn't recovered via `InitialImageOperation$FilterInfoMessage`, but I haven't been able to reproduce that case. Do you see any benefit in finding and creating a TC for this scenario, given that recovery of durable CQs is already tested with a TC without redundancy?






[GitHub] [geode] jvarenina commented on a change in pull request #5360: GEODE-8329: Fix for durable CQ reqistration recovery

2020-07-13 Thread GitBox


jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r453607399



##
File path: geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, boolean isDurable) {
           .set(((DefaultQueryService) this.pool.getQueryService()).getUserAttributes(name));
     }
     try {
-      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
   Thanks for your comments!
   
   Related to Interest recovery, I have tried the following case:
   ```
   Start three servers. Start a client with the following configuration:
   - redundancy set to 0
   - register non-durable Interests
   - configure a durable id
   ```
   
   After I shut down the primary server, I expected the client to register/recover its Interests on another running server. I tried exactly the same case with and without the code you suggested. I noticed that some steps related to the recovery of non-durable Interests are missing when using the solution you suggested (please check the logs below). Is this expected?
   
   without your code:
   ```
   [debug 2020/07/13 13:56:37.574 CEST  
tid=0x5a] SubscriptionManager redundancy satisfier - Non backup server was made 
primary. Recovering interest jakov:26486
   [info 2020/07/13 13:56:37.574 CEST :41002 port 26486> tid=0x5b] Cache Client Updater 
Thread  on 192.168.1.101(12538):41002 port 26486 (jakov:26486) : ready to 
process messages.
   [debug 2020/07/13 13:56:37.576 CEST  
tid=0x5a] 
org.apache.geode.cache.client.internal.QueueManagerImpl@69610f07.recoverSingleRegion
 starting kind=KEY region=/HAInterestBaseTest_region: {k1=KEYS, k2=KEYS}
   [debug 2020/07/13 13:56:37.576 CEST  
tid=0x5a] registerInterestsStarted: new count = 1
   [debug 2020/07/13 13:56:37.578 CEST  
tid=0x5a] localDestroyNoCallbacks key=k2
   [debug 2020/07/13 13:56:37.579 CEST  
tid=0x5a] basicDestroyPart2: k2, version=null
   [debug 2020/07/13 13:56:37.580 CEST  
tid=0x5a] VersionedThinRegionEntryHeapStringKey1@1f47aafa (key=k2; 
rawValue=REMOVED_PHASE1; version={v1; rv2; mbr=192.168.1.101(12538):41002; 
time=1594641397170};member=192.168.1.101(12538):41002) dispatching event 
EntryEventImpl[op=LOCAL_DESTROY;region=/HAInterestBaseTest_region;key=k2;callbackArg=null;originRemote=false;originMember=jakov(12395:loner):57906:02b10848]
   [debug 2020/07/13 13:56:37.580 CEST  
tid=0x5a] localDestroyNoCallbacks key=k1
   [debug 2020/07/13 13:56:37.580 CEST  
tid=0x5a] basicDestroyPart2: k1, version=null
   [debug 2020/07/13 13:56:37.580 CEST  
tid=0x5a] VersionedThinRegionEntryHeapStringKey1@5ecf6c9c (key=k1; 
rawValue=REMOVED_PHASE1; version={v1; rv1; mbr=192.168.1.101(12538):41002; 
time=1594641397148};member=192.168.1.101(12538):41002) dispatching event 
EntryEventImpl[op=LOCAL_DESTROY;region=/HAInterestBaseTest_region;key=k1;callbackArg=null;originRemote=false;originMember=jakov(12395:loner):57906:02b10848]
   [debug 2020/07/13 13:56:37.580 CEST  
tid=0x5a] 
org.apache.geode.cache.client.internal.QueueManagerImpl@69610f07.recoverSingleRegion
 :Endpoint recovered is primary so clearing the keys of interest starting 
kind=KEY region=/HAInterestBaseTest_region: [k1, k2]
   [debug 2020/07/13 13:56:37.584 CEST  
tid=0x5a] 
org.apache.geode.internal.cache.LocalRegion[path='/HAInterestBaseTest_region';scope=LOCAL';dataPolicy=NORMAL;
 concurrencyChecksEnabled] refreshEntriesFromServerKeys count=2 policy=KEYS
 k1
 k2
   [debug 2020/07/13 13:56:37.584 CEST  
tid=0x5a] refreshEntries region=/HAInterestBaseTest_region
   [debug 2020/07/13 13:56:37.585 CEST  
tid=0x5a] registerInterestCompleted: new value = 0
   [debug 2020/07/13 13:56:37.585 CEST  
tid=0x5a] registerInterestCompleted: Signalling end of register-interest
   [debug 2020/07/13 13:56:37.586 CEST  
tid=0x5a] Primary recovery not needed
   
   ```
   
   with your code:
   ```
   [debug 2020/07/13 13:44:20.028 CEST  
tid=0x5a] SubscriptionManager redundancy satisfier - Non backup server was made 
primary. Recovering interest jakov:28101
   [info 2020/07/13 13:44:20.028 CEST :41002 port 28101> tid=0x5b] Cache Client Updater 
Thread  on 192.168.1.101(11053):41002 port 28101 (jakov:28101) : ready to 
process messages.
   [debug 2020/07/13 13:44:20.030 CEST  
tid=0x5a] Primary recovery not needed
   ```
   










[GitHub] [geode] jvarenina commented on a change in pull request #5360: GEODE-8329: Fix for durable CQ reqistration recovery

2020-07-10 Thread GitBox


jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r452734828



##
File path: geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, boolean isDurable) {
           .set(((DefaultQueryService) this.pool.getQueryService()).getUserAttributes(name));
     }
     try {
-      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+      if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
   Hi @agingade ,
   
   Thanks for your comments!
   
   My understanding was that it is possible to configure a durable client identity in combination with both non-durable and durable CQs/Interests. For example, these are the parameters and APIs used to configure the durable client identity, CQs, and region Interests:
   
   ```
   // API
   registerInterestForAllKeys(InterestResultPolicy policy, boolean isDurable)
   newCq(String queryString, CqAttributes cqAttr, boolean isDurable)
   // Parameters
   durable-client-id, durable-client-timeout
   ```
   
   If you look at the `recoverAllInterestTypes` function with this in mind, the code there makes sense:
  
   - First register non-durable CQs and Interests
   - If a durable client identity is configured, then also register durable CQs and Interests (it is not possible to have durable CQs/Interests without a durable client identity)
   
   Also, when we look deeper at the function level, in `recoverInterestList -> recoverSingleList -> getRegionToInterestsMap -> getInterestLookupIndex(isDurable, receiveUpdatesAsInvalidates)`:
   
   ```
   public static int getInterestLookupIndex(boolean isDurable, boolean receiveUpdatesAsInvalidates) {
     if (isDurable) {
       if (receiveUpdatesAsInvalidates) {
         return durableInterestListIndexForUpdatesAsInvalidates;
       } else {
         return durableInterestListIndex;
       }
     } else {
       if (receiveUpdatesAsInvalidates) {
         return interestListIndexForUpdatesAsInvalidates;
       } else {
         return interestListIndex;
       }
     }
   }
   ```
   It filters the Interest lists based on the durable flag. So in the `recoverAllInterestTypes` function, this results in the non-durable Interests being recovered first, and then the durable ones (if a durable client identity is configured).
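   As a standalone illustration, that branch can be exercised directly. Note that the index constants below are placeholders invented for this sketch; the real values are internal to Geode's interest-list bookkeeping:

   ```java
   public class InterestLookupSketch {
     // Placeholder index values, assumed for illustration only; the real
     // constants are internal to Geode's interest-list bookkeeping.
     static final int interestListIndex = 0;
     static final int interestListIndexForUpdatesAsInvalidates = 1;
     static final int durableInterestListIndex = 2;
     static final int durableInterestListIndexForUpdatesAsInvalidates = 3;

     // Same four-way branch as getInterestLookupIndex: the durable flag
     // selects the list family, the invalidates flag selects the variant.
     static int getInterestLookupIndex(boolean isDurable, boolean receiveUpdatesAsInvalidates) {
       if (isDurable) {
         return receiveUpdatesAsInvalidates
             ? durableInterestListIndexForUpdatesAsInvalidates
             : durableInterestListIndex;
       }
       return receiveUpdatesAsInvalidates
           ? interestListIndexForUpdatesAsInvalidates
           : interestListIndex;
     }

     public static void main(String[] args) {
       // Each (durable, invalidates) combination maps to a distinct list,
       // so durable and non-durable entries never share a list.
       System.out.println(getInterestLookupIndex(false, false)); // 0
       System.out.println(getInterestLookupIndex(true, false));  // 2
     }
   }
   ```

   The point of the partition is that durable and non-durable registrations land in separate lists, so they can be recovered in separate passes.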
   
   My solution was simply to follow this approach with CQs: first register non-durable CQs, then durable ones. I used the `CqQuery.isDurable()` flag, since it is set with the following API:
   
   `newCq(String queryString, CqAttributes cqAttr, **boolean isDurable**)`
   
   I think the code you proposed would not register **non-durable** Interests, and would register only **durable** Interests when a durable client is configured. Additionally, in the durable-client case it would register all CQs as **durable** (even those configured as non-durable). I will try to run some tests with the solution you proposed; since there are a couple of cases, it will take some time.
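   A minimal sketch of that two-pass ordering, using a hypothetical `CqInfo` stand-in rather than Geode's internal CQ representation (the names and types here are illustrative, not Geode's API):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class CqRecoveryOrderSketch {

     // Hypothetical stand-in for a client-side CQ bookkeeping entry;
     // the real class is Geode-internal.
     record CqInfo(String name, boolean durable) {}

     // Recover in the order argued for above: first the CQs whose isDurable
     // flag (set via newCq(query, attrs, isDurable)) is false, then, only
     // for a durable client, the durable ones.
     static List<String> recoveryOrder(List<CqInfo> cqs, boolean hasDurableClientId) {
       List<String> order = new ArrayList<>();
       for (CqInfo cq : cqs) {
         if (!cq.durable()) {
           order.add(cq.name());   // pass 1: non-durable CQs
         }
       }
       if (hasDurableClientId) {
         for (CqInfo cq : cqs) {
           if (cq.durable()) {
             order.add(cq.name()); // pass 2: durable CQs
           }
         }
       }
       return order;
     }

     public static void main(String[] args) {
       List<CqInfo> cqs = List.of(
           new CqInfo("tempTracker", false),
           new CqInfo("randomTracker", true));
       System.out.println(recoveryOrder(cqs, true));   // [tempTracker, randomTracker]
       System.out.println(recoveryOrder(cqs, false));  // [tempTracker]
     }
   }
   ```

   Note the second call: without a durable client identity, durable CQs are never recovered, mirroring the guard in `recoverAllInterestTypes`.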
   
   Related to your comment about redundancy: I have already tested with redundancy configured and didn't observe any problem in that case. I will look at that case in more detail, since it seems that the recovery of CQs and Interests is performed differently than when redundancy is not configured for the subscription event queues.
   
   BR/Jakov








