mccheah commented on issue #25602: [SPARK-28613][SQL] Add config option for 
limiting uncompressed result size in SQL
URL: https://github.com/apache/spark/pull/25602#issuecomment-541654539
 
 
   Agree with @dvogelbacher here. The existing configuration only constrains 
the compressed size, not the uncompressed size.
   
   I think there's another question to consider here: should 
`spark.driver.maxResultSize` account for the uncompressed size in the 
first place? One could argue the configuration should have constrained the 
uncompressed size from the start - since of course that's ultimately what 
ends up being stored in driver memory.
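
   A quick, self-contained illustration (not Spark code) of why a limit on 
compressed size does not bound memory: repetitive data can compress by 
orders of magnitude, so the decompressed payload held on the driver can be 
far larger than what the existing check measured.

   ```python
   import zlib

   # A highly repetitive 1 MB payload, standing in for a serialized result
   # batch that compresses very well (e.g. columnar data with repeated values).
   raw = b"spark" * 200_000            # 1,000,000 bytes uncompressed
   compressed = zlib.compress(raw)

   print(len(raw), len(compressed))
   # The compressed form is a tiny fraction of the original, so a limit
   # checked against it says little about memory used after decompression.
   assert len(compressed) < len(raw) // 100
   ```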
   
   But to avoid a semantic break in the existing configuration, adding a 
secondary configuration seems like a reasonable compromise. Alternatively, 
we could change the behavior of the existing configuration and ship it as 
a breaking change in Spark 3.0.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

