OneSizeFitsQuorum commented on code in PR #9768:
URL: https://github.com/apache/iotdb/pull/9768#discussion_r1186067388


##########
server/src/main/java/org/apache/iotdb/db/service/thrift/impl/DataNodeInternalRPCServiceImpl.java:
##########
@@ -289,18 +290,25 @@ public TSendFragmentInstanceResp sendFragmentInstance(TSendFragmentInstanceReq r
   }
 
   @Override
-  public TSendPlanNodeResp sendPlanNode(TSendPlanNodeReq req) {
-    LOGGER.debug("receive PlanNode to group[{}]", req.getConsensusGroupId());
-    ConsensusGroupId groupId =
-        ConsensusGroupId.Factory.createFromTConsensusGroupId(req.getConsensusGroupId());
-    PlanNode planNode = PlanNodeType.deserialize(req.planNode.body);
-    RegionWriteExecutor executor = new RegionWriteExecutor();
-    TSendPlanNodeResp resp = new TSendPlanNodeResp();
-    RegionExecutionResult executionResult = executor.execute(groupId, planNode);
-    resp.setAccepted(executionResult.isAccepted());
-    resp.setMessage(executionResult.getMessage());
-    resp.setStatus(executionResult.getStatus());
-    return resp;
+  public TSendBatchPlanNodeResp sendBatchPlanNode(TSendBatchPlanNodeReq req) {
+    List<TSendSinglePlanNodeResp> responses =
+        req.getRequests().stream()

Review Comment:
   This may add a little latency in low-concurrency scenarios, but I think it is reasonable.
   
   For a write request, does the server need to use parallelism to reduce latency? To this question, I think the most important thing is to give a coordinated and unified answer. Without parallelism, the latency of a single request cannot be reduced, but we can guarantee that each request consumes at most one thread on each node it touches; to improve throughput, the user can simply increase client-side concurrency. With parallelism, a request should be able to use multiple threads on every related node to reduce latency, although there may be some performance cost under high concurrency.
   
   However, the code in the current dispatch phase is not consistent on this point. If you look at the code [here](https://github.com/apache/iotdb/blob/master/server/src/main/java/org/apache/iotdb/db/mpp/plan/scheduler/FragmentInstanceDispatcherImpl.java#L185-L202), it currently executes localInstances serially while executing remoteInstances in parallel, which I think is very strange. It does not give users a coherent answer.
   
   Ideally, perhaps both localInstance and remoteInstance should be executed in parallel, so that the answer is coherent. But that would introduce a thread pool, which I don't really want to do right now.
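   If we did go that way, a minimal sketch of the idea might look like the following. Note this uses hypothetical stand-in types, not the real IoTDB `FragmentInstance` or dispatcher classes: every instance, local or remote, is submitted to one shared pool, so end-to-end latency is bounded by the slowest instance rather than the sum over local instances.
   
   ```java
   import java.util.List;
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.stream.Collectors;
   
   // Hypothetical sketch only: unify local and remote dispatch behind one pool
   // so both paths get the same parallel semantics.
   public class ParallelDispatchSketch {
   
     // Stand-in for a fragment instance; not the real IoTDB type.
     record Instance(String id, boolean local) {}
   
     // Placeholder for the real local/remote write path.
     static boolean execute(Instance instance) {
       return true;
     }
   
     static List<Boolean> dispatchAll(List<Instance> instances, ExecutorService pool) {
       // Submit every instance, local or remote, to the same shared pool ...
       List<CompletableFuture<Boolean>> futures =
           instances.stream()
               .map(i -> CompletableFuture.supplyAsync(() -> execute(i), pool))
               .collect(Collectors.toList());
       // ... then wait for all of them, so latency is bounded by the slowest
       // instance instead of the serial sum over local instances.
       return futures.stream().map(CompletableFuture::join).collect(Collectors.toList());
     }
   
     public static void main(String[] args) {
       ExecutorService pool = Executors.newFixedThreadPool(4);
       List<Instance> instances =
           List.of(new Instance("local-1", true), new Instance("remote-1", false));
       List<Boolean> results = dispatchAll(instances, pool);
       System.out.println(results); // prints [true, true]
       pool.shutdown();
     }
   }
   ```
   
   The cost, as noted above, is that this introduces a thread pool whose sizing and scheduling we would then have to manage.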
   
   In summary, I currently think the better and clearer choice is to give write requests non-parallel semantics that users can understand, letting them raise concurrency to improve throughput. After we reduce the number of thread pools and are able to do some scheduling at the application level, perhaps we can parallelize both local and remote execution, allowing latency to fall further in low-concurrency scenarios.


