[ https://issues.apache.org/jira/browse/SPARK-13021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118274#comment-15118274 ]

Apache Spark commented on SPARK-13021:
--------------------------------------

User 'JoshRosen' has created a pull request for this issue:
https://github.com/apache/spark/pull/10932

> Fail fast when custom RDDs violate the RDD.partitions API contract
> -------------------------------------------------------------------
>
>                 Key: SPARK-13021
>                 URL: https://issues.apache.org/jira/browse/SPARK-13021
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>
> Spark's {{Partition}} and {{RDD.partitions}} APIs have a contract which 
> requires custom implementations of {{RDD.partitions}} to ensure that for all 
> {{x}}, {{rdd.partitions(x).index == x}}; in other words, the {{index}} 
> reported by a partition needs to match its position in the partitions array.
> If a custom RDD implementation violates this contract, then Spark has the 
> potential to become stuck in an infinite recomputation loop when recomputing 
> a subset of an RDD's partitions, since the tasks that are actually run will 
> not correspond to the missing output partitions that triggered the 
> recomputation. Here's a link to a notebook which demonstrates this problem: 
> https://rawgit.com/JoshRosen/e520fb9a64c1c97ec985/raw/5e8a5aa8d2a18910a1607f0aa4190104adda3424/Violating%2520RDD.partitions%2520contract.html
> In order to guard against this infinite-loop behavior, I think that Spark 
> should fail fast and refuse to compute RDDs whose {{partitions}} violate the 
> API contract.
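
For readers who want to see the failure mode concretely, below is a minimal Scala sketch (not the actual change in the pull request linked above) of a custom RDD that breaks the contract, together with the kind of up-front validation the issue asks for. {{MisindexedRDD}} and {{validatePartitionIndices}} are illustrative names only.

{code:scala}
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical custom RDD whose getPartitions violates the contract:
// each Partition reports an index that does not match its position
// in the returned array (here, the indices are reversed).
class MisindexedRDD(sc: SparkContext, numSlices: Int) extends RDD[Int](sc, Nil) {

  override protected def getPartitions: Array[Partition] =
    Array.tabulate[Partition](numSlices) { i =>
      new Partition {
        // Contract violation: should be `i`.
        override def index: Int = numSlices - 1 - i
      }
    }

  override def compute(split: Partition, context: TaskContext): Iterator[Int] =
    Iterator(split.index)
}

// The kind of fail-fast check the issue proposes (illustrative only; the
// real change is in the pull request above): verify the contract before
// the partitions array is used, instead of looping on recomputation.
def validatePartitionIndices(rdd: RDD[_]): Unit =
  rdd.partitions.zipWithIndex.foreach { case (part, i) =>
    require(part.index == i,
      s"partitions($i).index == ${part.index}; RDD.partitions contract violated")
  }

// Usage: validatePartitionIndices(new MisindexedRDD(sc, 4))
// throws IllegalArgumentException instead of silently scheduling bad tasks.
{code}

A check like this, run when the partitions array is first materialized, would surface the bug at job-submission time rather than after the scheduler has already begun recomputing the wrong partitions.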



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
