viirya commented on code in PR #52747:
URL: https://github.com/apache/spark/pull/52747#discussion_r2493207797
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala:
##########
@@ -3958,6 +3958,20 @@ object SQLConf {
"than zero and less than INT_MAX.")
.createWithDefaultString("64MB")
+ val ARROW_EXECUTION_COMPRESSION_CODEC =
+ buildConf("spark.sql.execution.arrow.compressionCodec")
+ .doc("Compression codec used to compress Arrow IPC data when transferring data " +
Review Comment:
I think no — it is currently applied to `toArrow` and `toPandas`, which is what
the reported issue covers. It should also be available to Arrow UDFs and pandas
UDFs; I will try to extend it to those cases.
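For context, a minimal sketch of how this config would be used on the paths the review says it currently covers (`toPandas`/`toArrow`). The config key comes from the diff above; the codec value `"zstd"` and the presence of an active SparkSession are assumptions for illustration, not confirmed defaults:

```python
# Hedged sketch: enabling the proposed codec for toPandas()/toArrow() transfers.
# "zstd" is an assumed codec value for illustration only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Arrow-based transfer must be enabled for the codec to take effect.
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
# Codec applied when serializing Arrow IPC batches sent to the driver
# (per this review, not yet applied on the Arrow/pandas UDF paths).
spark.conf.set("spark.sql.execution.arrow.compressionCodec", "zstd")

pdf = spark.range(10).toPandas()  # batches compressed with the codec above
```

This is a configuration sketch requiring a live Spark session, so it is illustrative rather than a standalone test.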
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]