vicennial commented on code in PR #42321:
URL: https://github.com/apache/spark/pull/42321#discussion_r1283200814


##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/execution/SparkConnectPlanExecution.scala:
##########
@@ -100,7 +100,8 @@ private[execution] class SparkConnectPlanExecution(executeHolder: ExecuteHolder)
     val maxRecordsPerBatch = spark.sessionState.conf.arrowMaxRecordsPerBatch
     val timeZoneId = spark.sessionState.conf.sessionLocalTimeZone
    // Conservatively sets it 70% because the size is not accurate but estimated.
-    val maxBatchSize = (SparkEnv.get.conf.get(CONNECT_GRPC_ARROW_MAX_BATCH_SIZE) * 0.7).toLong
+    val maxBatchSize =

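For context, the 70% scaling in the diff can be sketched as a standalone helper. This is a minimal illustration, not Spark's actual API; the object and method names here are hypothetical:

```scala
// Hypothetical sketch of the conservative sizing above: the estimated
// Arrow batch size is inexact, so only 70% of the configured gRPC
// limit (CONNECT_GRPC_ARROW_MAX_BATCH_SIZE in the diff) is used.
object BatchSizeEstimate {
  def conservativeMaxBatchSize(configuredLimit: Long): Long =
    (configuredLimit * 0.7).toLong
}
```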
Review Comment:
   That makes sense, yes. However, I am not very familiar with testing the Arrow code path, particularly with generating batches of a specific size (in order to trigger the limit). Perhaps a large range query would be sufficient? I'll try this out, but pointers are appreciated.
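   As a rough model of why a large range query should be enough to trigger the limit: rows accumulate in the current batch until the next row's estimated size would push it past `maxBatchSize`, at which point a new batch starts, so enough rows necessarily force multiple batches. A minimal sketch under that assumption (hypothetical helper, not the actual planner code):

```scala
object ArrowBatching {
  // Hypothetical model of the batching loop: rows are appended to the
  // current batch until adding the next row's estimated size would
  // exceed maxBatchSize, then a new batch is started. A large enough
  // range of rows therefore produces more than one batch.
  def batchCount(rowSizes: Seq[Long], maxBatchSize: Long): Int = {
    var batches = 1
    var current = 0L
    for (size <- rowSizes) {
      if (current + size > maxBatchSize && current > 0) {
        batches += 1
        current = 0L
      }
      current += size
    }
    batches
  }
}
```

Under this model, 10 rows of 100 bytes against a 250-byte limit split into 5 batches of 2 rows each, which is the kind of overflow a test would want to observe.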



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

