LantaoJin commented on a change in pull request #27828: [SPARK-31068][SQL]
Avoid IllegalArgumentException in broadcast exchange
URL: https://github.com/apache/spark/pull/27828#discussion_r389755750
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala
##########
@@ -87,9 +87,12 @@ case class BroadcastExchangeExec(
         val beforeCollect = System.nanoTime()
         // Use executeCollect/executeCollectIterator to avoid conversion to Scala types
         val (numRows, input) = child.executeCollectIterator()
-        if (numRows >= 512000000) {
+        // Since the maximum number of keys that BytesToBytesMap supports is 1 << 29,
+        // and only 70% of the slots can be used before growing in HashedRelation,
+        // here the limitation should not be over 341 million.
+        if (numRows >= (1 << 29) / 1.5) {
           throw new SparkException(
-            s"Cannot broadcast the table with 512 million or more rows: $numRows rows")
+            s"Cannot broadcast the table with 341 million or more rows: $numRows rows")
Review comment:
@maropu Do you think we need to explain in the error message why 341 million is used instead of 512 million? I think it's just a limitation, whatever the value is. It just tells the user that the job failed because it tried to broadcast an unexpectedly large table.
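For illustration only, here is a minimal standalone sketch (not part of the PR) of the check the diff introduces: the cap is derived from the 1 << 29 key limit of BytesToBytesMap divided by 1.5 for growth headroom. The object name and the `numRows` value are made up for the example.

```scala
// Sketch of the row cap implied by the diff's constants.
// BytesToBytesMap can hold at most 1 << 29 keys, and the diff divides that
// by 1.5 to leave headroom before HashedRelation would need to grow.
object BroadcastRowCapSketch extends App {
  val maxBytesToBytesMapKeys: Long = 1L << 29            // 536,870,912 keys
  val rowCap: Long = (maxBytesToBytesMapKeys / 1.5).toLong

  // Hypothetical row count for a table about to be broadcast.
  val numRows: Long = 400000000L

  if (numRows >= rowCap) {
    // Same shape as the SparkException thrown in BroadcastExchangeExec.
    println(s"Cannot broadcast the table with $rowCap or more rows: $numRows rows")
  }
}
```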