captainzmc commented on a change in pull request #2782:
URL: https://github.com/apache/ozone/pull/2782#discussion_r739940860



##########
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/ContainerStateMachine.java
##########
@@ -515,8 +515,11 @@ private ContainerCommandResponseProto runCommand(
 
         ContainerCommandResponseProto response = runCommand(
             requestProto, context);
-        String path = response.getMessage();
-        return new LocalStream(new StreamDataChannel(Paths.get(path)));
+        final StreamDataChannel channel = new StreamDataChannel(
+            Paths.get(response.getMessage()));
+        final ExecutorService chunkExecutor = requestProto.hasWriteChunk() ?
+            getChunkExecutor(requestProto.getWriteChunk()) : null;
+        return new LocalStream(channel, chunkExecutor);

Review comment:
       Hi @szetszwo, the size of `chunkExecutors` is determined by 
`dfs.container.ratis.num.write.chunk.threads.per.volume`, so I'm wondering 
whether we should delete [datastream.write.threads in 
DatanodeRatisServerConfig](https://github.com/apache/ozone/blob/HDDS-4454/hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/conf/DatanodeRatisServerConfig.java#L144).
 Now that we pass a chunkExecutor in every time, that configuration seems to be 
unused.
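
To illustrate the point, here is a minimal, hypothetical sketch of the pattern the diff relies on: a fixed pool of executors, one per volume, each sized by `dfs.container.ratis.num.write.chunk.threads.per.volume`, with the executor for a given write selected deterministically from the request. The class name, the selection-by-container-id scheme, and the thread-count constant are all assumptions for illustration, not the actual Ozone implementation.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one executor per volume, each pool sized by the
// dfs.container.ratis.num.write.chunk.threads.per.volume setting, so no
// separate datastream.write.threads value is consulted anywhere.
public class ChunkExecutorSketch {
  static final int THREADS_PER_VOLUME = 10; // assumed value for illustration

  // Two "volumes" for the sketch; real code would build one pool per volume.
  static final List<ExecutorService> CHUNK_EXECUTORS = List.of(
      Executors.newFixedThreadPool(THREADS_PER_VOLUME),
      Executors.newFixedThreadPool(THREADS_PER_VOLUME));

  // Pick an executor deterministically from the container id so all chunk
  // writes for one container land on the same pool (illustrative scheme).
  static ExecutorService getChunkExecutor(long containerId) {
    int i = (int) (containerId % CHUNK_EXECUTORS.size());
    return CHUNK_EXECUTORS.get(i);
  }

  public static void main(String[] args) throws Exception {
    // The same container always maps to the same executor.
    System.out.println(
        getChunkExecutor(42L) == getChunkExecutor(42L));
    for (ExecutorService es : CHUNK_EXECUTORS) {
      es.shutdown();
      es.awaitTermination(1, TimeUnit.SECONDS);
    }
  }
}
```

Since every `LocalStream` now receives its executor this way, the total write-thread count is fully governed by the per-volume setting, which is what makes the separate datastream thread configuration look redundant.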




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


