[ 
https://issues.apache.org/jira/browse/SPARK-27940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Rosen updated SPARK-27940:
-------------------------------
    Description: 
{{SubtractedRDD}}, which is used to implement {{RDD.subtract()}} and 
{{PairRDDFunctions.subtractByKey()}}, currently buffers an entire partition 
of the left-hand RDD in memory and does not support spilling: 
[https://github.com/apache/spark/blob/v2.4.3/core/src/main/scala/org/apache/spark/rdd/SubtractedRDD.scala#L42]
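
For reference, a minimal illustration of the two affected operations (the 
sample data here is made up, and an existing {{SparkContext}} {{sc}} is 
assumed):
{code:java}
val left  = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)))
val right = sc.parallelize(Seq(("b", 99)))

// Both of these are backed by SubtractedRDD, which holds a whole
// partition of `left` in an in-memory JHashMap while streaming `right`:
left.subtractByKey(right).collect()        // keeps ("a", 1) and ("c", 3)
left.keys.subtract(right.keys).collect()   // keeps "a" and "c"
{code}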

In principle, we could implement {{subtractByKey}} as a left-outer join 
followed by a filter (i.e. an anti-join), but the Scaladoc explains why this 
approach wasn't taken:
{code:java}
* It is possible to implement this operation with just `cogroup`, but
* that is less efficient because all of the entries from `rdd2`, for
* both matching and non-matching values in `rdd1`, are kept in the
* JHashMap until the end.{code}
For example, if we have {{left.subtractByKey(right)}} and {{right}} has 
hundreds of occurrences of a key, then we'd end up buffering hundreds of 
tuples for that key.
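
For concreteness, a cogroup-based {{subtractByKey}} would look roughly like 
the sketch below (a hypothetical helper, not actual Spark code). It is 
correct, but {{cogroup}} materializes every {{right}} value for a key even 
though we only need to know whether any exist:
{code:java}
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Hypothetical cogroup-based subtractByKey: all right-side values for a
// key are buffered before we get the chance to discard them.
def subtractByKeyViaCogroup[K: ClassTag, V: ClassTag, W](
    left: RDD[(K, V)],
    right: RDD[(K, W)]): RDD[(K, V)] =
  left.cogroup(right).flatMap { case (k, (leftValues, rightValues)) =>
    if (rightValues.isEmpty) leftValues.map(v => (k, v))
    else Iterable.empty
  }
{code}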

Instead, maybe we could implement a sort-merge join where we build an 
{{ExternalAppendOnlyMap}} of unique {{right}} keys, use an {{ExternalSorter}} 
to sort the {{left}} input, then iterate over both sorted iterators and 
perform a merge.
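
To sketch just the merge step of that idea (ignoring the spilling machinery, 
and assuming both inputs are already sorted by key):
{code:java}
// Merge two key-sorted streams: emit a left record only if its key never
// appears on the right. A single forward pass over each input, with no
// per-key buffering of values. In the real implementation the inputs
// would come from an ExternalSorter and an ExternalAppendOnlyMap, both
// of which can spill to disk.
def mergeSubtract[K, V](sortedLeft: Iterator[(K, V)], sortedRightKeys: Iterator[K])
    (implicit ord: Ordering[K]): Iterator[(K, V)] = {
  val right = sortedRightKeys.buffered
  sortedLeft.filter { case (k, _) =>
    while (right.hasNext && ord.lt(right.head, k)) right.next()
    !(right.hasNext && ord.equiv(right.head, k))
  }
}
{code}
For example, {{mergeSubtract(Iterator(("a", 1), ("b", 2), ("c", 3)), Iterator("b")).toList}} 
yields {{List(("a", 1), ("c", 3))}}.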

Note that this problem only impacts the RDD API.

Here are some existing workarounds for this OOM-proneness:
 * Use more partitions: e.g. {{left.subtractByKey(right, 2000)}} (or pass in a 
custom partitioner).
 * Use a left outer join followed by a filter: 
  {code:java}
left
  .leftOuterJoin(right)
  .collect { case (k, (lv, None)) => (k, lv) }{code}
If you wanted to further optimize, you could replace {{right}} values with 
dummy placeholders to avoid having to shuffle them:
  {code:java}
left
  .leftOuterJoin(right.map { case (k, v) => (k, 0) })
  .collect { case (k, (lv, None)) => (k, lv) }{code}

 * Use DataFrames / Datasets instead of RDDs (e.g. an anti join, as sketched below).
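
For the last workaround, the SQL engine's anti join has the same semantics 
as {{subtractByKey}} and fully supports spilling; a minimal sketch (column 
names here are illustrative):
{code:java}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("subtract-example").getOrCreate()
import spark.implicits._

val leftDF  = Seq(("a", 1), ("b", 2), ("c", 3)).toDF("key", "value")
val rightDF = Seq(("b", 99)).toDF("key", "value")

// "left_anti" keeps exactly the left rows whose key has no match on the
// right -- the DataFrame equivalent of subtractByKey.
val result = leftDF.join(rightDF, Seq("key"), "left_anti")
{code}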

> SubtractedRDD is OOM-prone because it does not support spilling
> ---------------------------------------------------------------
>
>                 Key: SPARK-27940
>                 URL: https://issues.apache.org/jira/browse/SPARK-27940
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Josh Rosen
>            Priority: Minor
>


