Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/1526#issuecomment-49822452
@pwendell The confusing part is the definition of "partitioning": it could mean either the indexing of partitions or the partitioner. The first time I saw that `mapPartitions` has a parameter called `preservesPartitioning`, I thought it meant preserving the indexing of partitions --- so partition 0 maps to partition 0, partition 1 maps to partition 1, etc.
This causes problems. For example, we set `preservesPartitioning` to `true` in `RDD.zip`, and the following code won't run correctly:
~~~
val a = sc.makeRDD(Seq(0, 1, 2, 3, 4)).map(x => (x, 1)).partitionBy(new HashPartitioner(2))
val b = a.map(x => 1)
a.zip(b).join(a.map(x => (x, 1))).collect()
~~~
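For what it's worth, the two readings can be contrasted in a small toy model (a Python sketch, not Spark; all helper names here are made up): `mapPartitions` trivially preserves the *indexing* of partitions, but once a transformation changes the keys, the old *partitioner* no longer describes where those keys would hash.

~~~python
# Toy model (not Spark): an "RDD" is a list of partitions and a partitioner
# is a function key -> partition index. Helper names are hypothetical.

def hash_partitioner(num_partitions):
    # stand-in for Spark's HashPartitioner
    return lambda key: hash(key) % num_partitions

def partition_by(records, partitioner, num_partitions):
    # like RDD.partitionBy: place each (key, value) record by its key
    parts = [[] for _ in range(num_partitions)]
    for k, v in records:
        parts[partitioner(k)].append((k, v))
    return parts

def map_partitions(parts, f):
    # like mapPartitions: output partition i is computed from input partition
    # i, so the *indexing* of partitions is preserved by construction
    return [list(f(iter(p))) for p in parts]

p = hash_partitioner(2)
a = partition_by([(x, 1) for x in range(5)], p, 2)

# Re-key each record by the old (key, value) pair, much like zip does:
zipped = map_partitions(a, lambda it: (((k, v), 1) for k, v in it))

# Every record is still in the partition it started in (indexing preserved),
# but `p` no longer describes where the new pair-keys would hash, so claiming
# the *partitioner* is preserved here would be unsafe.
for i, part in enumerate(zipped):
    print(i, part)
~~~

The sketch is only meant to separate the two notions; the real fix is to set `preservesPartitioning` only when the keys are untouched.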
Btw, `preservePartitioning` is used in streaming instead of
`preservesPartitioning`. @tdas