GitHub user wzhfy opened a pull request:

    https://github.com/apache/spark/pull/19743

    [SPARK-22515] [SQL] Estimation of relation size based on numRows * rowSize

    ## What changes were proposed in this pull request?
    
    Currently, relation size is computed as the sum of file sizes, which is
    error-prone because storage formats like Parquet can have a much smaller
    file size than the in-memory size. When we choose broadcast join based
    on file size, there is a risk of OOM. But if the number of rows is
    available in statistics, we can get a better estimate with
    `numRows * rowSize`, which helps alleviate this problem.
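
    For illustration only, here is a minimal sketch of the idea (the method
    and parameter names below are hypothetical, not the actual code in this
    patch): prefer `numRows * rowSize` when row-count statistics exist, and
    fall back to the summed file size otherwise.

    ```scala
    // Hypothetical sketch: estimate relation size from row-count statistics
    // when available, otherwise fall back to the on-disk file size.
    def estimateRelationSize(
        totalFileSize: BigInt,        // sum of on-disk file sizes (e.g. Parquet)
        rowCount: Option[BigInt],     // row count from table statistics, if any
        avgRowSize: BigInt): BigInt = // estimated size of one row in memory
      rowCount match {
        // Statistics available: numRows * rowSize approximates the in-memory
        // size better than compressed, columnar on-disk file sizes.
        case Some(numRows) => numRows * avgRowSize
        // No statistics: keep the existing file-size based estimate.
        case None => totalFileSize
      }
    ```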
    
    ## How was this patch tested?
    
    Added new test cases for data source tables and Hive tables.


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/wzhfy/spark better_leaf_size

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/19743.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #19743
    
----
commit cc9ecc68901de5bcaa082021d867ecb79d0ae00c
Author: Zhenhua Wang <[email protected]>
Date:   2017-11-14T09:06:17Z

    relation estimation

----

