GitHub user viirya commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6454#discussion_r31290696
  
    --- Diff: core/src/main/scala/org/apache/spark/rdd/CartesianRDD.scala ---
    @@ -72,8 +74,21 @@ class CartesianRDD[T: ClassTag, U: ClassTag](
     
       override def compute(split: Partition, context: TaskContext): Iterator[(T, U)] = {
         val currSplit = split.asInstanceOf[CartesianPartition]
    +
    +    val key = RDDBlockId(rdd2.id, currSplit.s2.index)
    +    val updatedBlocks = new ArrayBuffer[(BlockId, BlockStatus)]
    +    def cachedValues(): Iterator[U] = {
    +      SparkEnv.get.blockManager.memoryStore.unrollSafely(key,
    +        rdd2.iterator(currSplit.s2, context), updatedBlocks) match {
    +          case Left(arr) =>
    +            arr.iterator.asInstanceOf[Iterator[U]]
    +          case Right(it) =>
    +            it.asInstanceOf[Iterator[U]]
    +        }
    +    }
    +
         for (x <- rdd1.iterator(currSplit.s1, context);
    -         y <- rdd2.iterator(currSplit.s2, context)) yield (x, y)
     +         y <- new InterruptibleIterator(context, cachedValues())) yield (x, y)
    --- End diff --
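    The diff above avoids recomputing rdd2's partition for every element of rdd1 by unrolling it once (via `MemoryStore.unrollSafely`) and reusing the cached values. A minimal sketch of that idea in plain Python, with hypothetical names (`cartesian_recompute`, `cartesian_cached`, `make_inner`) standing in for the Spark internals:

    ```python
    def cartesian_recompute(outer, make_inner):
        # Naive version, analogous to the old code: the inner sequence is
        # recomputed from scratch for every outer element.
        for x in outer:
            for y in make_inner():
                yield (x, y)

    def cartesian_cached(outer, make_inner):
        # Cached version, analogous to the patch: materialize the inner
        # sequence once (like unrolling rdd2's partition into the block
        # store), then reuse it for every outer element.
        cached = list(make_inner())
        for x in outer:
            for y in cached:
                yield (x, y)
    ```

    With N outer elements, `make_inner` runs N times in the naive version but only once in the cached one; the trade-off, as in the patch, is holding the materialized inner partition in memory.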
    
    @squito Yes. I noticed that but have not committed the updates yet. Interestingly, the performance gap between before and after applying the remote-caching approach in #5572 seems to have narrowed in the latest codebase. Since this issue has been around for a while, I wonder whether other improvements have already made this change unimportant for performance. I will test it again to see whether that is the case. If so, these two PRs can be closed.
    
    @tbertelsen Can you try the latest Spark codebase for `RDD.cartesian` performance too?
    


