GitHub user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/21342
  
    Thanks a lot for looking into this.
    
    The issue is that users sometimes configure `spark.sql.broadcastTimeout`
    to a larger value, because the `relationFuture` in `BroadcastExchangeExec`
    can take a long time to finish. When an OOM happens, it makes no sense for
    the user to keep waiting until the timeout expires.
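    
    For context, raising that timeout looks roughly like the sketch below
    (the app name and master are illustrative; the value of
    `spark.sql.broadcastTimeout` is in seconds):
    
        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .appName("broadcast-timeout-example")
          .master("local[*]")
          // Give the relationFuture more time before the driver gives up.
          .config("spark.sql.broadcastTimeout", "1200")
          .getOrCreate()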
    
    Yes, there is an implicit behavior change in this PR. With this change we
    throw a `SparkException` and swallow the `OutOfMemoryError`. In the
    existing code, when the OOM happens, a
    `java.util.concurrent.TimeoutException` is thrown rather than the
    `OutOfMemoryError`.
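    
    To illustrate why the existing code surfaces a `TimeoutException`, here is
    a minimal, self-contained sketch (assuming the relation is built in a
    plain Scala `Future`, with a simulated OOM): `scala.concurrent` only
    catches non-fatal throwables, so the future never completes and the
    caller blocks until the timeout:
    
        import scala.concurrent.{Await, Future}
        import scala.concurrent.ExecutionContext.Implicits.global
        import scala.concurrent.duration._

        object TimeoutInsteadOfOom {
          def main(args: Array[String]): Unit = {
            // A fatal error thrown in the body is not captured by the
            // promise, so the future below stays pending forever.
            val relationFuture = Future[Array[Byte]] {
              throw new OutOfMemoryError("simulated OOM while building relation")
            }

            // The caller never sees the OOM; it blocks for the full timeout
            // and then gets a java.util.concurrent.TimeoutException instead.
            Await.result(relationFuture, 5.seconds)
          }
        }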
    
    If we really want to throw an `OutOfMemoryError` here, can we do the steps
    below (see the runnable sketch after the list)?
    1. When the OOM happens, catch it inside the future and throw a
       `SparkException` that wraps it.
    2. On the caller side, unwrap the failure and rethrow the original error:
           relationFuture onFailure {
             case t: SparkException => throw new OutOfMemoryError(t.getMessage)
           }
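    
    A self-contained sketch of those two steps (the object name and the
    `SparkException` stand-in are illustrative, not the actual Spark classes):
    
        import scala.concurrent.{Await, Future}
        import scala.concurrent.ExecutionContext.Implicits.global
        import scala.concurrent.duration._

        // Stand-in for org.apache.spark.SparkException, just for this sketch.
        class SparkException(message: String, cause: Throwable)
          extends Exception(message, cause)

        object OomPropagationSketch {
          def main(args: Array[String]): Unit = {
            // Step 1: catch the OOM inside the future and wrap it in a
            // SparkException, so the future fails immediately instead of
            // leaving the caller blocked until the broadcast timeout.
            val relationFuture = Future[Array[Byte]] {
              try {
                throw new OutOfMemoryError("simulated OOM while building relation")
              } catch {
                case oom: OutOfMemoryError =>
                  throw new SparkException("Failed to build broadcast relation", oom)
              }
            }

            // Step 2: where the result is awaited, unwrap the SparkException
            // and rethrow the original OutOfMemoryError for callers that
            // expect the original error type.
            try {
              Await.result(relationFuture, 10.seconds)
            } catch {
              case e: SparkException if e.getCause.isInstanceOf[OutOfMemoryError] =>
                throw e.getCause
            }
          }
        }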

