Github user JoshRosen commented on a diff in the pull request:

    https://github.com/apache/spark/pull/2092#discussion_r16633420
  
    --- Diff: python/pyspark/rdd.py ---
    @@ -1715,6 +1715,52 @@ def batch_as(rdd, batchSize):
                                             other._jrdd_deserializer)
             return RDD(pairRDD, self.ctx, deserializer)
     
    +    def zipWithIndex(self):
    +        """
    +        Zips this RDD with its element indices.
    +
    +        The ordering is first based on the partition index and then the
    +        ordering of items within each partition. So the first item in
    +        the first partition gets index 0, and the last item in the last
    +        partition receives the largest index.
    +
    +        This method needs to trigger a Spark job when this RDD contains
    +        more than one partition.
    +
    +        >>> sc.parallelize(range(4), 2).zipWithIndex().collect()
    +        [(0, 0), (1, 1), (2, 2), (3, 3)]
    --- End diff --
    
    This isn't the best example because it's not clear which element is the
    item and which is its index. In the Scala API, this is clear from the
    method's return type. Maybe we should update the documentation to
    explicitly state that the second element is the index (as in the Scala
    API).
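
    For example, a doctest like this (hypothetical, and assuming we adopt the
    Scala-style (item, index) ordering) would make the pairing unambiguous:

    ```python
    >>> sc.parallelize(['a', 'b'], 2).zipWithIndex().collect()
    [('a', 0), ('b', 1)]
    ```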
    
    I think this implementation has things backwards w.r.t. the Scala one:
    
    ```python
    >>> sc.parallelize(['a', 'b', 'c', 'd'], 2).zipWithIndex().collect()
    [(0, 'a'), (1, 'b'), (2, 'c'), (3, 'd')]
    ```
    
    versus
    
    ```scala
    scala> sc.parallelize(Seq('a', 'b', 'c', 'd')).zipWithIndex().collect()
    res0: Array[(Char, Long)] = Array((a,0), (b,1), (c,2), (d,3))
    ```
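
    For reference, here is a minimal sketch of how the Python side could
    match the Scala ordering (hypothetical code, not the actual patch),
    pairing each item with its index rather than the reverse:

    ```python
    def zip_with_index(rdd):
        # Compute the starting offset of each partition. This triggers a
        # Spark job when there is more than one partition, because the
        # per-partition counts must be known before indexing can begin.
        starts = [0]
        if rdd.getNumPartitions() > 1:
            nums = rdd.mapPartitions(lambda it: [sum(1 for _ in it)]).collect()
            for num in nums[:-1]:
                starts.append(starts[-1] + num)

        def func(split, iterator):
            # Yield (item, index), matching the Scala API's (T, Long) pairs.
            return ((item, i + starts[split])
                    for i, item in enumerate(iterator))

        return rdd.mapPartitionsWithIndex(func)
    ```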

