dillitz commented on code in PR #41829:
URL: https://github.com/apache/spark/pull/41829#discussion_r1254234636


##########
connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/ArtifactManager.scala:
##########
@@ -51,13 +51,14 @@ import org.apache.spark.util.{SparkFileUtils, SparkThreadUtils}
 class ArtifactManager(
     userContext: proto.UserContext,
     sessionId: String,
-    channel: ManagedChannel) {
+    channel: ManagedChannel,
+    retryPolicy: GrpcRetryHandler.RetryPolicy) {
   // Using the midpoint recommendation of 32KiB for chunk size as specified in
   // https://github.com/grpc/grpc.github.io/issues/371.
   private val CHUNK_SIZE: Int = 32 * 1024
 
-  private[this] val stub = proto.SparkConnectServiceGrpc.newStub(channel)
-  private[this] val bstub = proto.SparkConnectServiceGrpc.newBlockingStub(channel)
+  private[this] val stub = new CustomSparkConnectStub(channel, retryPolicy)
+  private[this] val bstub = new CustomSparkConnectBlockingStub(channel, retryPolicy)

Review Comment:
   I'm just not sure how much sense this makes from a design perspective, given all the [addArtifact(s) methods](https://github.com/apache/spark/blob/ae3defcba23632a438ff646fcca13901cb53ec3f/connector/connect/client/jvm/src/main/scala/org/apache/spark/sql/connect/client/SparkConnectClient.scala#L207C8-L207C8) we currently have in the client. We could instead pass the stubs from the client to the `ArtifactManager`, so it doesn't need to build its own from the channel and retry policy.
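   The suggestion could look roughly like the minimal sketch below: the client constructs the stubs once and injects them. The type names mirror the PR, but the string-typed fields and the `describe` helper are placeholders for illustration, not the real Spark Connect API:

   ```scala
   // Hypothetical, simplified stand-ins for the Spark Connect stub types; the
   // real ones wrap a gRPC ManagedChannel and a GrpcRetryHandler.RetryPolicy.
   final case class CustomSparkConnectStub(channel: String, retryPolicy: String)
   final case class CustomSparkConnectBlockingStub(channel: String, retryPolicy: String)

   // Alternative design: ArtifactManager receives the already-built stubs, so it
   // no longer depends on the channel or the retry policy at all.
   class ArtifactManager(
       sessionId: String,
       stub: CustomSparkConnectStub,
       bstub: CustomSparkConnectBlockingStub) {
     def describe: String = s"ArtifactManager(session=$sessionId)"
   }

   // The client owns stub construction and shares the instances.
   object Client {
     def main(args: Array[String]): Unit = {
       val stub = CustomSparkConnectStub("channel", "retryPolicy")
       val bstub = CustomSparkConnectBlockingStub("channel", "retryPolicy")
       val mgr = new ArtifactManager("session-1", stub, bstub)
       println(mgr.describe)
     }
   }
   ```

   This keeps retry configuration in one place (the client) instead of threading it through every component that needs a stub.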



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
