GitHub user JoshRosen opened a pull request:

    https://github.com/apache/spark/pull/10932

    [SPARK-13021][CORE] Fail fast when custom RDDs violate RDD.partition's API 
contract

    Spark's `Partition` and `RDD.partitions` APIs have a contract which 
requires custom implementations of `RDD.partitions` to ensure that for all `x`, 
`rdd.partitions(x).index == x`; in other words, the `index` reported by a 
partition needs to match its position in the partitions array.
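    
    For illustration, a hypothetical custom RDD that violates this contract 
might look like the sketch below; the class and partition names here are 
invented for the example and are not part of this patch.
    
    import org.apache.spark.{Partition, SparkContext, TaskContext}
    import org.apache.spark.rdd.RDD
    
    // Hypothetical partition type whose reported index is set independently of
    // its position in the partitions array.
    case class MisindexedPartition(index: Int) extends Partition
    
    class MisindexedRDD(sc: SparkContext) extends RDD[Int](sc, Nil) {
      // Contract violation: partitions(0).index == 1 and partitions(1).index == 0.
      override protected def getPartitions: Array[Partition] =
        Array(MisindexedPartition(1), MisindexedPartition(0))
    
      override def compute(split: Partition, context: TaskContext): Iterator[Int] =
        Iterator(split.index)
    }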
    
    If a custom RDD implementation violates this contract, then Spark has the 
potential to become stuck in an infinite recomputation loop when recomputing a 
subset of an RDD's partitions, since the tasks that are actually run will not 
correspond to the missing output partitions that triggered the recomputation. 
Here's a link to a notebook which demonstrates this problem: 
https://rawgit.com/JoshRosen/e520fb9a64c1c97ec985/raw/5e8a5aa8d2a18910a1607f0aa4190104adda3424/Violating%2520RDD.partitions%2520contract.html
    
    In order to guard against this infinite loop behavior, this patch modifies 
Spark so that it fails fast and refuses to compute RDDs whose `partitions` 
violate the API contract.
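    
    A minimal sketch of the kind of sanity check this describes is shown 
below; it only illustrates the idea and is not necessarily the exact code or 
location used in the patch.
    
    import org.apache.spark.rdd.RDD
    
    // Illustrative fail-fast check: verify that every partition's reported index
    // matches its position in the partitions array before computing the RDD.
    def checkPartitionIndices(rdd: RDD[_]): Unit = {
      rdd.partitions.zipWithIndex.foreach { case (partition, i) =>
        require(partition.index == i,
          s"partitions($i).index == ${partition.index}; the Partition API contract " +
            "requires each partition's index to equal its position in the array")
      }
    }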

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/JoshRosen/spark SPARK-13021

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/10932.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #10932
    
----
commit 10efe2e8b9889c3714740d63794b6e5a8a0d355b
Author: Josh Rosen <[email protected]>
Date:   2016-01-26T23:29:59Z

    [SPARK-13021][CORE] Fail fast when custom RDDs violate RDD.partition's API 
contract
    
    Spark's `Partition` and `RDD.partitions` APIs have a contract which 
requires custom implementations of `RDD.partitions` to ensure that for all `x`, 
`rdd.partitions(x).index == x`; in other words, the `index` reported by a 
partition needs to match its position in the partitions array.
    
    If a custom RDD implementation violates this contract, then Spark has the 
potential to become stuck in an infinite recomputation loop when recomputing a 
subset of an RDD's partitions, since the tasks that are actually run will not 
correspond to the missing output partitions that triggered the recomputation. 
Here's a link to a notebook which demonstrates this problem: 
https://rawgit.com/JoshRosen/e520fb9a64c1c97ec985/raw/5e8a5aa8d2a18910a1607f0aa4190104adda3424/Violating%2520RDD.partitions%2520contract.html
    
    In order to guard against this infinite loop behavior, I think that Spark 
should fail fast and refuse to compute RDDs whose `partitions` violate the API 
contract.

----

