[ https://issues.apache.org/jira/browse/HDFS-17558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17866561#comment-17866561 ]

ASF GitHub Bot commented on HDFS-17558:
---------------------------------------

simbadzina commented on code in PR #6902:
URL: https://github.com/apache/hadoop/pull/6902#discussion_r1680186166


##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestObserverWithRouter.java:
##########
@@ -618,6 +618,7 @@ public void testRouterResponseHeaderState() {
     Configuration conf = new Configuration();
     conf.setBoolean(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY, true);
     conf.set(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_OVERRIDES, "ns1");
+    conf.setInt(RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE, 1);

Review Comment:
   Can you set this up in a separate unit test? That test already exercises
another parameter, `RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_OVERRIDES`, and the
interaction between the two settings was hard to see at first.

   Maybe something like the following:
   ```java
     @Test
     @Tag(SKIP_BEFORE_EACH_CLUSTER_STARTUP)
     public void testRouterResponseHeaderStateMaxSizeLimit() {
       Configuration conf = new Configuration();
       conf.setBoolean(RBFConfigKeys.DFS_ROUTER_OBSERVER_READ_DEFAULT_KEY, true);
       conf.setInt(RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE, 1);

       RouterStateIdContext routerStateIdContext = new RouterStateIdContext(conf);

       ConcurrentHashMap<String, LongAccumulator> namespaceIdMap =
           routerStateIdContext.getNamespaceIdMap();
       namespaceIdMap.put("ns0", new LongAccumulator(Math::max, 10));
       namespaceIdMap.put("ns1", new LongAccumulator(Math::max, Long.MIN_VALUE));

       RpcHeaderProtos.RpcResponseHeaderProto.Builder responseHeaderBuilder =
           RpcHeaderProtos.RpcResponseHeaderProto
               .newBuilder()
               .setCallId(1)
               .setStatus(RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto.SUCCESS);
       routerStateIdContext.updateResponseState(responseHeaderBuilder);

       Map<String, Long> latestFederateState = RouterStateIdContext.getRouterFederatedStateMap(
           responseHeaderBuilder.build().getRouterFederatedState());
       // Validate that ns0 is part of the header; ns1 carries no state and is skipped
       Assertions.assertEquals(1, latestFederateState.size());

       namespaceIdMap.put("ns2", new LongAccumulator(Math::max, 20));
       // Rebuild header
       responseHeaderBuilder =
           RpcHeaderProtos.RpcResponseHeaderProto
               .newBuilder()
               .setCallId(1)
               .setStatus(RpcHeaderProtos.RpcResponseHeaderProto.RpcStatusProto.SUCCESS);
       routerStateIdContext.updateResponseState(responseHeaderBuilder);
       latestFederateState = RouterStateIdContext.getRouterFederatedStateMap(
           responseHeaderBuilder.build().getRouterFederatedState());
       // Validate that nothing is propagated once the namespace count exceeds the limit
       Assertions.assertEquals(0, latestFederateState.size());
     }
   ```
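   As an optional extra (not part of the snippet above, purely a suggestion), the first
assertion could also pin down which namespace survived the propagation rather than only
checking the map size:
   ```java
       // Hypothetical extra check: the single surviving entry should be ns0,
       // since ns1 was registered with no state (Long.MIN_VALUE).
       Assertions.assertTrue(latestFederateState.containsKey("ns0"));
   ```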





> RBF: Make maxSizeOfFederatedStateToPropagate work on setResponseHeaderState.
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-17558
>                 URL: https://issues.apache.org/jira/browse/HDFS-17558
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: rbf
>            Reporter: fuchaohong
>            Priority: Major
>
> When the size of namespaceIdMap exceeds 
> RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE, the 
> federated state does not propagate. This behavior is inconsistent with the 
> configuration description, which states that the size of the federated state 
> propagated to the client should be limited.
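
For illustration only, here is a minimal, self-contained sketch of the propagation rule implied by the issue description and the reviewer's test above; the class and method names are hypothetical and this is not the actual RouterStateIdContext code. Namespaces whose accumulator still holds Long.MIN_VALUE are treated as having no state, and once the remaining namespaces exceed the configured maximum, no federated state is attached to the response header at all.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAccumulator;

// Hypothetical stand-in for the size guard; not the Hadoop implementation.
public class FederatedStatePropagationSketch {

  /**
   * Collect the per-namespace state ids worth propagating, or nothing at all
   * if their count exceeds the configured maximum.
   */
  static Map<String, Long> stateToPropagate(
      ConcurrentHashMap<String, LongAccumulator> namespaceIdMap, int maxSize) {
    Map<String, Long> state = new HashMap<>();
    for (Map.Entry<String, LongAccumulator> entry : namespaceIdMap.entrySet()) {
      long stateId = entry.getValue().get();
      if (stateId != Long.MIN_VALUE) { // Long.MIN_VALUE means "no state seen yet"
        state.put(entry.getKey(), stateId);
      }
    }
    // Over the limit: drop the federated state from the response header entirely.
    return state.size() <= maxSize ? state : Collections.emptyMap();
  }

  public static void main(String[] args) {
    ConcurrentHashMap<String, LongAccumulator> map = new ConcurrentHashMap<>();
    map.put("ns0", new LongAccumulator(Math::max, 10));
    map.put("ns1", new LongAccumulator(Math::max, Long.MIN_VALUE));
    System.out.println(stateToPropagate(map, 1)); // {ns0=10}
    map.put("ns2", new LongAccumulator(Math::max, 20));
    System.out.println(stateToPropagate(map, 1)); // {} -- two namespaces exceed the limit of 1
  }
}
```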



