Github user tgravescs commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22112#discussion_r212305067
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
    @@ -855,16 +858,17 @@ abstract class RDD[T: ClassTag](
        * a map on the other).
        */
       def zip[U: ClassTag](other: RDD[U]): RDD[(T, U)] = withScope {
    -    zipPartitions(other, preservesPartitioning = false) { (thisIter, otherIter) =>
    -      new Iterator[(T, U)] {
    -        def hasNext: Boolean = (thisIter.hasNext, otherIter.hasNext) match {
    -          case (true, true) => true
    -          case (false, false) => false
    -          case _ => throw new SparkException("Can only zip RDDs with " +
    -            "same number of elements in each partition")
    +    zipPartitionsInternal(other, preservesPartitioning = false, orderSensitiveFunc = true) {
    --- End diff --
    
    I thought we were only fixing repartition in this PR?
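    For context, the pairing logic in the removed lines can be sketched in plain Scala with no Spark dependency (`zipStrict` is a hypothetical name, and `IllegalArgumentException` stands in for Spark's `SparkException`): the iterators are advanced in lockstep, and the zip fails if one side runs out of elements before the other.

    ```scala
    // Hypothetical sketch of RDD.zip's per-partition pairing: both
    // iterators must yield the same number of elements, otherwise
    // hasNext throws instead of silently truncating.
    def zipStrict[T, U](left: Iterator[T], right: Iterator[U]): Iterator[(T, U)] =
      new Iterator[(T, U)] {
        def hasNext: Boolean = (left.hasNext, right.hasNext) match {
          case (true, true)   => true
          case (false, false) => false
          case _ => throw new IllegalArgumentException(
            "Can only zip iterators with the same number of elements")
        }
        def next(): (T, U) = (left.next(), right.next())
      }
    ```

    The real implementation runs this per partition via `zipPartitions`, which is why zip only works when the two RDDs have the same partitioning and the same element count in each partition.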


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
