yl09099 commented on code in PR #1147:
URL: https://github.com/apache/incubator-uniffle/pull/1147#discussion_r1386583233


##########
client-spark/spark3/src/main/java/org/apache/spark/shuffle/writer/RssShuffleWriter.java:
##########
@@ -473,4 +492,61 @@ Map<Integer, Set<Long>> getPartitionToBlockIds() {
   public WriteBufferManager getBufferManager() {
     return bufferManager;
   }
+
+  private static ShuffleManagerClient createShuffleManagerClient(String host, int port)
+      throws IOException {
+    ClientType grpc = ClientType.GRPC;
+    // Host can be inferred from `spark.driver.bindAddress`, which would be set when SparkContext is
+    // constructed.
+    return ShuffleManagerClientFactory.getInstance().createShuffleManagerClient(grpc, host, port);
+  }
+
+  private RssException throwFetchFailedIfNecessary(Exception e) {

Review Comment:
   > You don't get my point. You can return void and throw the exception which you need.
   
   Understood, but this write path was written with reference to the read-failure handling; if we change the style here, the read side should be changed at the same time to keep the code uniform. As above, should the next PR address this?
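
   To make the two styles under discussion concrete, here is a minimal, self-contained sketch (class and method names are illustrative, not the PR's actual API). Returning the exception lets the call site write `throw toFetchFailed(e);`, so the compiler sees that control flow ends there; the void variant throws internally, which the compiler cannot see at the call site.
   
   ```java
   public class ExceptionStyleSketch {
   
     // Stand-in for the fetch-failed exception type; hypothetical.
     static class FetchFailedException extends RuntimeException {
       FetchFailedException(Throwable cause) {
         super("fetch failed", cause);
       }
     }
   
     // Style the PR follows (mirrors the read-failure path): build and
     // return the exception, letting the caller `throw` it.
     static FetchFailedException toFetchFailed(Exception e) {
       return new FetchFailedException(e);
     }
   
     // Alternative suggested in the review: return void and throw inside.
     static void throwFetchFailed(Exception e) {
       throw new FetchFailedException(e);
     }
   
     public static void main(String[] args) {
       try {
         // Call site of the "return the exception" style: the explicit
         // `throw` makes the end of control flow visible to the compiler.
         throw toFetchFailed(new java.io.IOException("boom"));
       } catch (FetchFailedException ffe) {
         System.out.println("caught: " + ffe.getMessage());
       }
     }
   }
   ```
   
   Either shape works; the PR keeps the first one only for uniformity with the existing read-failure code.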



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
