jvarenina commented on a change in pull request #5360:
URL: https://github.com/apache/geode/pull/5360#discussion_r453489804



##########
File path: 
geode-core/src/main/java/org/apache/geode/cache/client/internal/QueueManagerImpl.java
##########
@@ -1112,7 +1112,8 @@ private void recoverCqs(Connection recoveredConnection, 
boolean isDurable) {
             .set(((DefaultQueryService) 
this.pool.getQueryService()).getUserAttributes(name));
       }
       try {
-        if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT) {
+        if (((CqStateImpl) cqi.getState()).getState() != CqStateImpl.INIT

Review comment:
      I have tested a TC with redundancy configured, and it seems that the
recovery of CQs is done differently in this case. The primary server sends
all relevant CQ information to the starting server within
`InitialImageOperation$FilterInfoMessage`. On receiving the message, the
starting server registers the CQs as durable (so no problem is observed in
this case).
   
   **Primary server:**
   ```
   [debug 2020/07/10 13:30:54.916 CEST <P2P message reader for 
192.168.1.102(server3:31347)<v4>:41001 shared unordered uid=1 local port=53683 
remote port=45674> tid=0x57] Received message 
'InitialImageOperation$RequestFilterInfoMessage(region 
path='/_gfe_durable_client_with_id_AppCounters_1_queue'; 
sender=192.168.1.102(server3:31347)<v4>:41001; processorId=27)' from 
<192.168.1.102(server3:31347)<v4>:41001>
   ```
   
   **Starting server:**
   ```
   [debug 2020/07/10 13:30:54.916 CEST <Client Queue Initialization Thread 1> 
tid=0x48] Sending (InitialImageOperation$RequestFilterInfoMessage(region 
path='/_gfe_durable_client_with_id_AppCounters_1_queue'; 
sender=192.168.1.102(server3:31347)<v4>:41001; processorId=27)) to 1 peers 
([192.168.1.102(server1:30862)<v1>:41000]) via tcp/ip
   
   [debug 2020/07/10 13:30:54.918 CEST <P2P message reader for 
192.168.1.102(server1:30862)<v1>:41000 shared unordered uid=5 local port=52175 
remote port=46552> tid=0x30] Received message 
'InitialImageOperation$FilterInfoMessage processorId=27 from 
192.168.1.102(server1:30862)<v1>:41000; NON_DURABLE allKeys=0; allKeysInv=0; 
keysOfInterest=0; keysOfInterestInv=0; patternsOfInterest=0; 
patternsOfInterestInv=0; filtersOfInterest=0; filtersOfInterestInv=0; DURABLE 
allKeys=0; allKeysInv=0; keysOfInterest=0; keysOfInterestInv=0; 
patternsOfInterest=0; patternsOfInterestInv=0; filtersOfInterest=0; 
filtersOfInterestInv=0; cqs=1' from <192.168.1.102(server1:30862)<v1>:41000>
   
   [debug 2020/07/10 13:30:54.919 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Processing FilterInfo for proxy: 
CacheClientProxy[identity(192.168.1.102(31226:loner):45576:8b927d38,connection=1,durableAttributes=DurableClientAttributes[id=AppCounters;
 timeout=200]); port=57552; primary=false; version=GEODE 1.12.0] : 
InitialImageOperation$FilterInfoMessage processorId=27 from 
192.168.1.102(server1:30862)<v1>:41000; NON_DURABLE allKeys=0; allKeysInv=0; 
keysOfInterest=0; keysOfInterestInv=0; patternsOfInterest=0; 
patternsOfInterestInv=0; filtersOfInterest=0; filtersOfInterestInv=0; DURABLE 
allKeys=0; allKeysInv=0; keysOfInterest=0; keysOfInterestInv=0; 
patternsOfInterest=0; patternsOfInterestInv=0; filtersOfInterest=0; 
filtersOfInterestInv=0; cqs=1
   
   [debug 2020/07/10 13:30:54.944 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Server side query for the cq: randomTracker is: SELECT * FROM 
/example-region i where i > 70
   [debug 2020/07/10 13:30:54.944 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Added CQ to the base region: /example-region With key as: 
randomTracker__AppCounters
   [debug 2020/07/10 13:30:54.944 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Adding CQ into MatchingCQ map, CQName: randomTracker__AppCounters 
Number of matched querys are: 1
   [debug 2020/07/10 13:30:54.945 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Adding to CQ Repository. CqName : randomTracker ServerCqName : 
randomTracker__AppCounters
   [debug 2020/07/10 13:30:54.945 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Adding CQ randomTracker__AppCounters to this members FilterProfile.
   [debug 2020/07/10 13:30:54.945 CEST <Pooled High Priority Message Processor 
3> tid=0x3d] Successfully created CQ on the server. CqName : randomTracker
   ```
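   
   The registration steps visible in the starting-server log above (derive the server-side CQ name, add the CQ to the base region's matching-CQ map, add it to the CQ repository, then to the FilterProfile) can be condensed into a small sketch. To be clear, the class and method names below are illustrative stand-ins I made up, not the actual Geode internals:
   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   
   // Illustrative sketch only: NOT the real Geode classes, just a condensation
   // of the durable-CQ registration sequence seen in the log above.
   public class FilterInfoRecoverySketch {
       // "CQ repository": serverCqName -> query string
       private final Map<String, String> cqRepository = new HashMap<>();
       // "MatchingCQ map": region path -> server-side CQ names registered on it
       private final Map<String, List<String>> matchingCqMap = new HashMap<>();
   
       /**
        * Registers one durable CQ carried by a FilterInfoMessage: derive the
        * server-side CQ name, record it against the base region, and add it to
        * the CQ repository (the real code also updates the FilterProfile).
        */
       public String registerRecoveredCq(String cqName, String durableClientId,
                                         String query, String regionPath) {
           // e.g. "randomTracker" + "AppCounters" -> "randomTracker__AppCounters"
           String serverCqName = cqName + "__" + durableClientId;
           matchingCqMap.computeIfAbsent(regionPath, k -> new ArrayList<>())
                        .add(serverCqName);
           cqRepository.put(serverCqName, query);
           return serverCqName;
       }
   
       public int matchingCqCount(String regionPath) {
           return matchingCqMap.getOrDefault(regionPath, List.of()).size();
       }
   
       public String queryFor(String serverCqName) {
           return cqRepository.get(serverCqName);
       }
   }
   ```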
   
   I can attach full logs if you need. Also, I have found the following 
comment in the client code:
   ```
   // Even though the new redundant queue will usually recover
   // subscription information (see bug #39014) from its initial
   // image provider, in bug #42280 we found that this is not always
   // the case, so clients must always register interest with the new
   // redundant server.
   if (recoverInterest) {
     recoverInterest(queueConnection, isFirstNewConnection);
   }
   ```
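   The comment above describes a belt-and-braces pattern: even though the initial image provider usually transfers subscription state, the client re-registers interest unconditionally on every new redundant connection. A minimal sketch of that control flow, with `RedundantConnection` and its methods invented purely for illustration (not the actual Geode client API):
   ```java
   // Hypothetical sketch of the "always re-register" pattern described in the
   // comment above; the interface and its methods are illustrative inventions.
   public class InterestRecoverySketch {

       interface RedundantConnection {
           boolean giiDeliveredSubscriptionState(); // true if the initial image carried it
           void registerInterest();                 // (re)register keys/patterns/CQs
       }

       /**
        * Even when the initial image usually carries subscription state
        * (bug #39014), it is not guaranteed (bug #42280), so interest is
        * registered on the new redundant server regardless.
        */
       static boolean recoverInterest(RedundantConnection conn) {
           // Deliberately NOT gated on giiDeliveredSubscriptionState():
           conn.registerInterest();
           return true;
       }
   }
   ```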
   It is stated here that there is a possible case where the redundant queue 
isn't recovered by `InitialImageOperation$FilterInfoMessage`, but I haven't 
been able to reproduce that case. Do you see any benefit in finding and 
creating a TC for this scenario, given that recovery of durable CQs is 
already tested with a TC without redundancy?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

