GitHub user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/1791#discussion_r15916792
--- Diff: python/pyspark/rdd.py ---
@@ -1684,11 +1812,57 @@ def zip(self, other):
>>> x.zip(y).collect()
[(0, 1000), (1, 1001), (2, 1002), (3, 1003), (4, 1004)]
"""
+ if self.getNumPartitions() != other.getNumPartitions():
+ raise ValueError("the number of partitions dose not match"
+ " with each other")
+
pairRDD = self._jrdd.zip(other._jrdd)
deserializer = PairDeserializer(self._jrdd_deserializer,
other._jrdd_deserializer)
return RDD(pairRDD, self.ctx, deserializer)
+
+ def zipPartitions(self, other, f, preservesPartitioning=False):
+ """
+ Zip this RDD's partitions with one (or more) RDD(s) and return a
+ new RDD by applying a function to the zipped partitions.
+
+ Not implemented.
+ """
+ raise NotImplementedError
+
+ def zipWithIndex(self):
+ """
+ Zips this RDD with its element indices.
--- End diff --
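A quick note on the new guard in `zip`: it makes a partition-count mismatch fail fast with a Python-level ValueError at the time `zip` is called, rather than surfacing later from the underlying Java zip. A minimal sketch of an interactive session (assuming a live SparkContext bound to `sc`; the RDD sizes and partition counts are chosen purely for illustration):

```python
>>> x = sc.parallelize(range(5), 2)           # 2 partitions
>>> y = sc.parallelize(range(1000, 1005), 3)  # 3 partitions
>>> x.zip(y)
Traceback (most recent call last):
    ...
ValueError: the number of partitions does not match between the two RDDs
```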
The Scala documentation for `zipWithIndex` is much more descriptive about what
this method does:
```scala
/**
 * Zips this RDD with its element indices. The ordering is first based on the partition index
 * and then the ordering of items within each partition. So the first item in the first
 * partition gets index 0, and the last item in the last partition receives the largest index.
 * This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type.
 * This method needs to trigger a spark job when this RDD contains more than one partitions.
 */
def zipWithIndex(): RDD[(T, Long)] = new ZippedWithIndexRDD(this)
```
The Python documentation should explain these subtleties, too.
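Concretely, the Python docstring could mirror the Scala wording. A rough sketch (the wording is adapted from the Scala doc quoted above, and the doctest is illustrative):

```python
def zipWithIndex(self):
    """
    Zips this RDD with its element indices.

    The ordering is first based on the partition index and then the
    ordering of items within each partition. So the first item in the
    first partition gets index 0, and the last item in the last
    partition receives the largest index.

    This method needs to trigger a Spark job when this RDD contains
    more than one partition.

    >>> sc.parallelize(["a", "b", "c", "d"], 2).zipWithIndex().collect()
    [('a', 0), ('b', 1), ('c', 2), ('d', 3)]
    """
```

Since Python ints are unbounded, the Scala note about Long versus Int indices has no direct Python analogue and can be dropped.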