omalley commented on code in PR #4311:
URL: https://github.com/apache/hadoop/pull/4311#discussion_r965216043
##########
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RpcConstants.java:
##########
@@ -37,7 +37,9 @@ private RpcConstants() {
public static final int INVALID_RETRY_COUNT = -1;
-
+  // Special state ID value to indicate client request header has routerFederatedState set.
Review Comment:
This is left over from the previous version of the patch, right?
##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java:
##########
@@ -349,6 +349,9 @@ public static ClientProtocol createProxyWithAlignmentContext(
boolean withRetries, AtomicBoolean fallbackToSimpleAuth,
AlignmentContext alignmentContext)
throws IOException {
+ if (alignmentContext == null) {
Review Comment:
Why do we need to override the null value? A null value means the caller
doesn't want to track alignment. Which call path leads to getting null here
when it shouldn't?
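For illustration only, the distinction I mean, with made-up names
(AlignmentTracker, createProxy) standing in for AlignmentContext and the real
NameNodeProxiesClient factory: a null context should mean "no alignment
tracking" and be passed through, not silently replaced with a default.

// Hypothetical sketch only; AlignmentTracker and createProxy stand in for
// AlignmentContext and the real proxy factory.
final class ProxySketch {

  interface AlignmentTracker {
    void receiveResponseState(long stateId);
  }

  // Null means the caller opted out of alignment tracking, so it is
  // respected rather than overridden with a default instance.
  static Runnable createProxy(String address, AlignmentTracker tracker) {
    return () -> {
      if (tracker != null) {
        tracker.receiveResponseState(42L); // placeholder state id
      }
      System.out.println("invoked proxy for " + address);
    };
  }

  public static void main(String[] args) {
    createProxy("nn1:8020", null).run();                          // no tracking
    createProxy("nn2:8020", id -> System.out.println(id)).run();  // tracked
  }
}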
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterStateIdContext.java:
##########
@@ -0,0 +1,105 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.lang.reflect.Method;
+import java.util.HashSet;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hdfs.protocol.ClientProtocol;
+import org.apache.hadoop.hdfs.server.namenode.ha.ReadOnly;
+import org.apache.hadoop.ipc.AlignmentContext;
+import org.apache.hadoop.ipc.RetriableException;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcRequestHeaderProto;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos.RpcResponseHeaderProto;
+
+
+/**
+ * This is the router implementation to hold the state Ids for all
+ * namespaces. This object is only updated by responses from NameNodes.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Evolving
+class RouterStateIdContext implements AlignmentContext {
+
+  private final HashSet<String> coordinatedMethods;
+  private final FederatedNamespaceIds federatedNamespaceIds;
+  /**
+   * Size limit for the map of state Ids to send to clients.
+   */
+  private final int maxSizeOfFederatedStateToPropagate;
+
+  RouterStateIdContext(Configuration conf, FederatedNamespaceIds federatedNamespaceIds) {
+    this.federatedNamespaceIds = federatedNamespaceIds;
+    this.coordinatedMethods = new HashSet<>();
+    // For now, only ClientProtocol methods can be coordinated, so only checking
+    // against ClientProtocol.
+    for (Method method : ClientProtocol.class.getDeclaredMethods()) {
+      if (method.isAnnotationPresent(ReadOnly.class)
+          && method.getAnnotationsByType(ReadOnly.class)[0].isCoordinated()) {
+        coordinatedMethods.add(method.getName());
+      }
+    }
+
+    maxSizeOfFederatedStateToPropagate =
+        conf.getInt(RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE,
+            RBFConfigKeys.DFS_ROUTER_OBSERVER_FEDERATED_STATE_PROPAGATION_MAXSIZE_DEFAULT);
+  }
+
+  @Override
+  public void updateResponseState(RpcResponseHeaderProto.Builder header) {
Review Comment:
Instead of a global msync, we should change the behavior so that any request
that comes in without the federated state ids is sent to the active NN. That
can be done in a different jira/pr, though.
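Roughly the dispatch rule I have in mind, as a standalone sketch with made-up
names (Request, chooseTarget) rather than the actual RBF routing code:

// Hypothetical sketch of the routing rule above; Request, NamenodeTarget and
// chooseTarget are illustrative names, not RBF APIs.
final class DispatchSketch {

  enum NamenodeTarget { ACTIVE, OBSERVER }

  static final class Request {
    final boolean readOnly;
    final boolean hasFederatedStateIds;

    Request(boolean readOnly, boolean hasFederatedStateIds) {
      this.readOnly = readOnly;
      this.hasFederatedStateIds = hasFederatedStateIds;
    }
  }

  // Reads without federated state ids cannot be checked for staleness, so
  // they go to the active NN; reads that carry state ids may go to an
  // observer once it has caught up.
  static NamenodeTarget chooseTarget(Request req) {
    if (!req.readOnly || !req.hasFederatedStateIds) {
      return NamenodeTarget.ACTIVE;
    }
    return NamenodeTarget.OBSERVER;
  }

  public static void main(String[] args) {
    System.out.println(chooseTarget(new Request(true, false)));  // ACTIVE
    System.out.println(chooseTarget(new Request(true, true)));   // OBSERVER
  }
}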
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java:
##########
@@ -368,8 +370,20 @@ private ConnectionContext getConnection(UserGroupInformation ugi, String nsId,
connUGI = UserGroupInformation.createProxyUser(
ugi.getUserName(), routerUser);
}
+
Review Comment:
Couldn't we move this code inside of ConnectionManager.getConnection and
avoid changing the API to add the clientStateId?
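Something along these lines is what I'm picturing (illustrative only;
ConnectionManagerSketch and its internal state-id map are made up, not the
actual ConnectionManager): the per-namespace state id is looked up inside
getConnection, so the public signature stays unchanged.

// Hypothetical sketch of the refactor being suggested; ConnectionManagerSketch
// and stateIdFor are illustrative, not the actual RBF classes.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAccumulator;

final class ConnectionManagerSketch {

  // Shared per-namespace state ids, injected/maintained inside the manager
  // instead of being threaded through every getConnection(...) call.
  private final Map<String, LongAccumulator> stateIds = new ConcurrentHashMap<>();

  long stateIdFor(String nsId) {
    return stateIds
        .computeIfAbsent(nsId, k -> new LongAccumulator(Math::max, Long.MIN_VALUE))
        .get();
  }

  // Signature stays (user, nsId, ...) with no extra clientStateId parameter.
  String getConnection(String user, String nsId) {
    long clientStateId = stateIdFor(nsId); // looked up internally
    return "connection[" + user + "@" + nsId + ", stateId=" + clientStateId + "]";
  }

  public static void main(String[] args) {
    ConnectionManagerSketch cm = new ConnectionManagerSketch();
    System.out.println(cm.getConnection("alice", "ns0"));
  }
}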
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/FederatedNamespaceIds.java:
##########
@@ -0,0 +1,78 @@
+/**
Review Comment:
I think this functionality should be merged into RouterStateIdContext, which
serves the same purpose and wraps the instance of this class in the Router.
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/PoolAlignmentContext.java:
##########
@@ -0,0 +1,89 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import java.io.IOException;
+import org.apache.hadoop.hdfs.NamespaceStateId;
+import org.apache.hadoop.ipc.AlignmentContext;
+import org.apache.hadoop.ipc.protobuf.RpcHeaderProtos;
+
+
Review Comment:
Please add a javadoc that describes how this class is used.
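Something like the following would help; this is only a rough draft based on
how the class appears to be used in this patch, so please correct the details:

/**
 * An AlignmentContext shared by all connections in a single connection
 * pool, i.e. by the proxies the Router holds for one downstream namespace.
 *
 * Assumed usage, to be confirmed against the patch: responses from the
 * namespace's NameNodes advance the shared state id, and outgoing requests
 * attach the latest state id so that reads routed to an observer NameNode
 * are only served once the observer has caught up to that state.
 */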
##########
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java:
##########
@@ -252,21 +252,21 @@ public class RouterRpcServer extends AbstractService implements ClientProtocol,
/**
* Construct a router RPC server.
*
- * @param configuration HDFS Configuration.
+ * @param conf HDFS Configuration.
* @param router A router using this RPC server.
* @param nnResolver The NN resolver instance to determine active NNs in HA.
- * @param fileResolver File resolver to resolve file paths to subclusters.
+ * @param fResolver File resolver to resolve file paths to subclusters.
* @throws IOException If the RPC server could not be created.
*/
- public RouterRpcServer(Configuration configuration, Router router,
- ActiveNamenodeResolver nnResolver, FileSubclusterResolver fileResolver)
+ public RouterRpcServer(Configuration conf, Router router,
+ ActiveNamenodeResolver nnResolver, FileSubclusterResolver fResolver)
Review Comment:
Why did you change the name? fileResolver is easier to read.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.