GitHub user vanzin opened a pull request:

    https://github.com/apache/spark/pull/15189

    [SPARK-17549][sql] Coalesce cached relation stats in driver.

    Currently there's a scalability problem with cached relations: stats
    for every column of every partition are captured in the driver. For
    large tables this adds up to a large amount of driver memory.
    
    This change modifies the accumulator used to capture stats in the
    driver to summarize the data as it arrives, instead of collecting
    everything and then summarizing it.
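    A minimal sketch of that idea (the names are illustrative, not
    Spark's actual accumulator API): each partition's per-column sizes
    are folded into a single running total and then discarded, so driver
    memory no longer grows with the partition count.
    
    ```scala
    object StatsSketch {
      // One running byte count per column, instead of one entry per partition.
      final class ColumnStatsAccumulator(numColumns: Int) {
        private val totalSizes = new Array[Long](numColumns)
    
        // Called once per partition; merges the partition's stats in place.
        def add(partitionColumnSizes: Array[Long]): Unit = {
          var i = 0
          while (i < numColumns) {
            totalSizes(i) += partitionColumnSizes(i)
            i += 1
          }
        }
    
        def value: Array[Long] = totalSizes.clone()
      }
    }
    ```
    
    With the buffering approach, each `add` call would instead append the
    partition's array to a list that is only reduced at the end.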
    
    Previously, for each column, the driver needed:
    
      (64 + 2 * sizeof(type)) * (number of partitions)  bytes
    
    With the change, the driver requires a fixed 8 bytes per column.
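    To make the formula concrete, here is a worked example under assumed
    (hypothetical) inputs: an 8-byte column type and 10,000 partitions,
    which the old scheme turns into 800 KB of driver memory per column.
    
    ```scala
    object StatsMemory {
      // Per-column driver memory under the old scheme, per the formula above.
      def perColumnBytesBefore(sizeOfType: Int, partitions: Int): Long =
        (64 + 2 * sizeOfType).toLong * partitions
    
      def main(args: Array[String]): Unit = {
        // e.g. an 8-byte (long) column across 10,000 partitions.
        println(perColumnBytesBefore(8, 10000)) // 800000 bytes
        println(8L)                             // fixed cost after the change
      }
    }
    ```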
    
    On top of that, the change fixes a second problem with how statistics
    are calculated for cached relations that share stats with another
    relation (e.g. a cached projection of a cached relation). Previously
    the data would be wrong, since the accumulator data was summarized
    based on the child's output while it actually reflected the parent's
    output. Now the calculation maps the child's output to the parent's
    output, yielding the correct size.
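    A toy sketch of that mapping (names and shapes are hypothetical, not
    the patch's actual code): the child shares the parent's per-column
    stats, so its size is the sum over only the columns it projects.
    
    ```scala
    object ProjectionSizeSketch {
      // Sum the parent's per-column sizes for just the columns the child outputs.
      def projectedSize(
          parentColumnSizes: Map[String, Long],
          childOutput: Seq[String]): Long =
        childOutput.map(parentColumnSizes).sum
    
      def main(args: Array[String]): Unit = {
        val sizes = Map("a" -> 100L, "b" -> 400L, "c" -> 50L)
        // The child projects only columns a and c of the parent.
        println(projectedSize(sizes, Seq("a", "c"))) // 150
      }
    }
    ```
    
    Summing over all of the parent's columns instead would overstate the
    projection's size, which is the bug described above.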
    
    Tested with the new unit test (which makes sure the calculated stats are
    correct), and by looking at the relation size in a heap dump.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vanzin/spark SPARK-17549

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/15189.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #15189
    
----
commit 5b3a65a02210c696206546c43403867bcc9eb077
Author: Marcelo Vanzin <van...@cloudera.com>
Date:   2016-09-20T22:57:41Z

    [SPARK-17549][sql] Coalesce cached relation stats in driver.
    

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
