szetszwo commented on code in PR #6613:
URL: https://github.com/apache/ozone/pull/6613#discussion_r2466555548
##########
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java:
##########
@@ -504,6 +516,65 @@ private XceiverClientReply sendCommandWithRetry(
     }
   }
+  /**
+   * Starts a streaming read operation, intended to read entire blocks from the datanodes. This method expects a
+   * {@link StreamObserver} to be passed in, which will be used to receive the streamed data from the datanode.
+   * Upon successfully starting the streaming read, a {@link StreamingReadResponse} is returned, which contains
+   * information about the datanode used for the read and the request observer that can be used to manage the stream
+   * (e.g., to cancel it if needed). A semaphore is acquired to limit the number of concurrent streaming reads, so
+   * upon successful return of this method, the caller must call {@link #completeStreamRead(StreamingReadResponse)}
+   * to release the semaphore once the streaming read is complete.
+   * @param request The container command request to initiate the streaming read.
+   * @param streamObserver The observer that will handle the streamed responses.
+   * @return A {@link StreamingReadResponse} containing details of the streaming read operation.
+   * @throws IOException if the streaming read cannot be started on any datanode.
+   * @throws InterruptedException if interrupted while acquiring the semaphore.
+   */
+  @Override
+  public StreamingReadResponse streamRead(ContainerCommandRequestProto request,
+      StreamObserver<ContainerCommandResponseProto> streamObserver) throws IOException, InterruptedException {
+    List<DatanodeDetails> datanodeList = sortDatanodes(request);
+    IOException lastException = null;
+    for (DatanodeDetails dn : datanodeList) {
+      try {
+        checkOpen(dn);
+        semaphore.acquire();
+        XceiverClientProtocolServiceStub stub = asyncStubs.get(dn.getID());
+        if (stub == null) {
+          throw new IOException("Failed to get gRPC stub for DataNode: " + dn);
+        }
+        if (LOG.isDebugEnabled()) {
+          LOG.debug("Executing command {} on datanode {}", processForDebug(request), dn);
+        }
+        StreamObserver<ContainerCommandRequestProto> requestObserver = stub
+            .withDeadlineAfter(timeout, TimeUnit.SECONDS)
+            .send(streamObserver);
+        requestObserver.onNext(request);
+        requestObserver.onCompleted();
+        return new StreamingReadResponse(dn,
+            (ClientCallStreamObserver<ContainerCommandRequestProto>) requestObserver);
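
The Javadoc contract above implies a caller pattern roughly like the following hedged sketch: `streamRead` and `completeStreamRead` are the methods named above, while the wrapper class, the `waitForCompletion` helper, and the use of `var` to sidestep the unknown `StreamingReadResponse` package are illustrative assumptions only.

```java
import java.io.IOException;

import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;
import org.apache.hadoop.hdds.scm.XceiverClientGrpc;

import io.grpc.stub.StreamObserver;

class StreamReadCaller {
  /** Sketch: always pair streamRead with completeStreamRead to release the semaphore. */
  void readBlock(XceiverClientGrpc client, ContainerCommandRequestProto request,
      StreamObserver<ContainerCommandResponseProto> responseObserver)
      throws IOException, InterruptedException {
    // var used because the StreamingReadResponse package is not shown in this excerpt.
    var stream = client.streamRead(request, responseObserver);
    try {
      waitForCompletion(responseObserver); // hypothetical: await onCompleted()/onError()
    } finally {
      client.completeStreamRead(stream); // releases the concurrency semaphore
    }
  }

  private void waitForCompletion(
      StreamObserver<ContainerCommandResponseProto> observer) {
    // Hypothetical helper: e.g. block on a future that the observer completes.
  }
}
```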
Review Comment:
> ... HDFS model works well and results in relatively simple code in the approach used here ...
I agree that the HDFS model works well, but the problem is that this approach is not the HDFS model:
- HDFS: Blocking API (Socket) + Blocking threads
- This approach: Non-blocking API (gRPC) + Blocking threads
- Suggested approach: Non-blocking API (gRPC) + Non-blocking threads (see the sketch below)
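
A minimal sketch of the suggested model, assuming a hypothetical client-side observer class (the chunk consumer and completion future are illustrative, not part of this PR): responses are handled in gRPC callbacks and completion is signalled through a future, so no application thread blocks for the lifetime of the stream.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;

import io.grpc.stub.StreamObserver;

/** Sketch: handle streamed responses in callbacks; no thread parks waiting. */
class NonBlockingReadObserver
    implements StreamObserver<ContainerCommandResponseProto> {
  private final Consumer<ContainerCommandResponseProto> chunkConsumer;
  private final CompletableFuture<Void> done = new CompletableFuture<>();

  NonBlockingReadObserver(Consumer<ContainerCommandResponseProto> chunkConsumer) {
    this.chunkConsumer = chunkConsumer;
  }

  @Override
  public void onNext(ContainerCommandResponseProto response) {
    chunkConsumer.accept(response); // runs on a gRPC event thread; return quickly
  }

  @Override
  public void onError(Throwable t) {
    done.completeExceptionally(t);
  }

  @Override
  public void onCompleted() {
    done.complete(null);
  }

  /** Completion future; chain follow-up work on it instead of blocking. */
  CompletableFuture<Void> whenDone() {
    return done;
  }
}
```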
> Of course we can cache open files, ...
I guess you mean the server side? We don't have to cache them. The server should open the file in the first gRPC onNext() call and close it in onCompleted()/onError(), as in the sketch below.
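
A rough server-side sketch of that lifecycle, assuming a hypothetical request-observer class and resolveBlockFile helper (the actual read/response logic is elided): the file lives only as long as the stream, so no cross-call cache is needed.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandRequestProto;
import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos.ContainerCommandResponseProto;

import io.grpc.stub.StreamObserver;

/** Sketch: the block file is scoped to the stream, opened lazily and closed at stream end. */
class StreamingReadRequestObserver
    implements StreamObserver<ContainerCommandRequestProto> {
  private final StreamObserver<ContainerCommandResponseProto> responseObserver;
  private FileChannel channel; // opened on the first onNext(), never cached across streams

  StreamingReadRequestObserver(
      StreamObserver<ContainerCommandResponseProto> responseObserver) {
    this.responseObserver = responseObserver;
  }

  @Override
  public void onNext(ContainerCommandRequestProto request) {
    try {
      if (channel == null) {
        // First call: open the block file for this stream.
        channel = FileChannel.open(resolveBlockFile(request), StandardOpenOption.READ);
      }
      // ... read the requested range and send chunks via responseObserver.onNext() ...
    } catch (IOException e) {
      onError(e);
    }
  }

  @Override
  public void onCompleted() {
    closeQuietly();
    responseObserver.onCompleted();
  }

  @Override
  public void onError(Throwable t) {
    closeQuietly();
    responseObserver.onError(t);
  }

  private void closeQuietly() {
    if (channel != null) {
      try {
        channel.close();
      } catch (IOException ignored) {
      }
      channel = null;
    }
  }

  // Hypothetical helper: map the request to a local block file path.
  private static Path resolveBlockFile(ContainerCommandRequestProto request) {
    throw new UnsupportedOperationException("lookup elided in this sketch");
  }
}
```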
> ... we agreed the handler pool would need to be greatly increased. I don't see any big problem with having a very large handler pool of threads that are mostly blocked. ...
Increasing the handler pool for performance makes sense. However, it is not a good way to avoid deadlocks.
BTW, how large is large enough? Would creating such a large handler pool slow down other operations?