aakshintala commented on code in PR #42321:
URL: https://github.com/apache/spark/pull/42321#discussion_r1283327336


##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/config/Connect.scala:
##########
@@ -49,9 +49,9 @@ object Connect {
   val CONNECT_GRPC_ARROW_MAX_BATCH_SIZE =
     ConfigBuilder("spark.connect.grpc.arrow.maxBatchSize")
       .doc(
-        "When using Apache Arrow, limit the maximum size of one arrow batch that " +
-          "can be sent from server side to client side. Currently, we conservatively use 70% " +
-          "of it because the size is not accurate but estimated.")
+        "When using Apache Arrow, limit the maximum size of one arrow batch, in MiB unless " +
+          "otherwise specified, that can be sent from server side to client side. Currently, we " +
+          "conservatively use 70% of it because the size is not accurate but estimated.")
       .version("3.4.0")
       .bytesConf(ByteUnit.MiB)

Review Comment:
   Bytes would just be better; no need to convert later. This config returning
4 is a fairly surprising sharp edge (although it makes sense in hindsight).
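   To illustrate the sharp edge: a config declared with `bytesConf(ByteUnit.MiB)` returns its value in MiB, so setting `"4m"` yields `4`, not `4194304`. The sketch below is a minimal standalone Scala mock, not Spark's actual `ConfigBuilder`; `parseToBytes` is a hypothetical helper that mimics Spark's byte-string parsing for the common `k`/`m`/`g` suffixes.

   ```scala
   // Standalone sketch (not Spark's ConfigBuilder) of why bytesConf(ByteUnit.MiB)
   // surprises: the stored value is in MiB, so "4m" comes back as 4, not 4194304.
   object ByteUnitSketch {
     // Hypothetical helper: parse a size string like "4m" into a byte count.
     def parseToBytes(s: String): Long = {
       val trimmed = s.trim.toLowerCase
       val (num, mult) = trimmed.last match {
         case 'k' => (trimmed.dropRight(1), 1024L)
         case 'm' => (trimmed.dropRight(1), 1024L * 1024)
         case 'g' => (trimmed.dropRight(1), 1024L * 1024 * 1024)
         case _   => (trimmed, 1L)
       }
       num.toLong * mult
     }

     def main(args: Array[String]): Unit = {
       val bytes = parseToBytes("4m")
       // What a bytesConf(ByteUnit.MiB) getter would hand back:
       val mib = bytes / (1024L * 1024)
       println(s"bytes=$bytes mib=$mib") // bytes=4194304 mib=4
     }
   }
   ```

   Declaring the conf in bytes, as suggested, would make the getter return the byte count directly and avoid callers having to remember the unit.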



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

