GitHub user RadixSeven opened a pull request:

    https://github.com/apache/spark/pull/3936

    Document that groupByKey will OOM for large keys

    This pull request is my own work and I license it under Spark's open-source 
license.
    
    This contribution improves the documentation. I documented that the maximum 
number of values per key for groupByKey is limited by the RAM available on a 
single executor (see [Databricks][databricks link] and [the Spark mailing 
list][list link]).
    
    Just saying that better performance is available elsewhere is not 
sufficient. Sometimes you genuinely need a group-by: the operation requires 
all of a key's items at once before it can complete. This warning explains 
the problem. A sketch of the failure mode follows the link definitions below.
    
    [databricks link]: http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/best_practices/prefer_reducebykey_over_groupbykey.html
    [list link]: http://apache-spark-user-list.1001560.n3.nabble.com/Understanding-RDD-GroupBy-OutOfMemory-Exceptions-tp11427p11466.html
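
    A minimal sketch of the failure mode (the skewed-data generator, object 
name, and app name below are illustrative, not part of this PR): groupByKey 
must hold every value for a key in one executor's memory, while reduceByKey 
combines values map-side before the shuffle.

        import org.apache.spark.{SparkConf, SparkContext}

        object GroupByOomSketch {
          def main(args: Array[String]): Unit = {
            val sc = new SparkContext(
              new SparkConf().setAppName("group-by-oom-sketch").setMaster("local[2]"))

            // Hypothetical skew: almost every record shares the key "hot".
            val pairs = sc.parallelize(1 to 1000000)
              .map(i => (if (i % 100 == 0) s"cold-$i" else "hot", 1))

            // groupByKey shuffles every value for "hot" to a single executor;
            // the value count per key is therefore bounded by that executor's RAM.
            val counted = pairs.groupByKey().mapValues(_.size)

            // reduceByKey sends only one partial sum per key per partition,
            // so the hot key's values never have to fit in memory all at once.
            val summed = pairs.reduceByKey(_ + _)

            println(counted.lookup("hot").headOption)
            println(summed.lookup("hot").headOption)
            sc.stop()
          }
        }

    When the grouped values are only being aggregated, reduceByKey (or 
aggregateByKey) sidesteps the limit entirely; the documented warning matters 
when the downstream operation truly needs every item for a key at once.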

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/RadixSeven/spark better-group-by-docs

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/3936.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3936
    
----
commit 238e81b926779ccf788d97c8a78306811dfbf2e9
Author: Eric Moyer <[email protected]>
Date:   2015-01-07T20:49:33Z

    Doc that groupByKey will OOM for large keys
    
    Documented that the maximum number of values per key for groupByKey is 
limited by available RAM (see [Databricks][databricks link] and [the Spark 
mailing list][list link]).
    
    Just saying that better performance is available elsewhere is not 
sufficient. Sometimes you genuinely need a group-by: the operation requires 
all of a key's items at once before it can complete. This warning explains 
the problem.
    
    [databricks link]: http://databricks.gitbooks.io/databricks-spark-knowledge-base/content/best_practices/prefer_reducebykey_over_groupbykey.html
    [list link]: http://apache-spark-user-list.1001560.n3.nabble.com/Understanding-RDD-GroupBy-OutOfMemory-Exceptions-tp11427p11466.html

----


