GitHub user mridulm commented on the issue:

    https://github.com/apache/spark/pull/16574
  
    
    A couple of points:
    
    a) Can recomputation be expensive? Unfortunately, yes, if not used 
properly. For better or for worse, this has been the implementation in Spark 
since the early days - pre-0.5 - and the costs are known. In particular, given 
Spark's ability to cache/checkpoint data, the assumption is that a shuffle is 
more expensive. That assumption might not hold anymore, given the improvements 
since 1.0 - but only redoing the benchmarks will give a clearer picture.
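    To make the recomputation point concrete, here is a toy pure-Python 
model of lazy lineage - not Spark's actual internals, just an illustration 
(the `LazyDataset` class and its counter are invented for this sketch; only 
the `cache`/`collect` names mirror Spark's API):

```python
# Toy model of lazy lineage: a dataset that re-runs its whole lineage
# on every action unless it has been cached. NOT Spark's real internals.
class LazyDataset:
    def __init__(self, compute):
        self._compute = compute      # thunk producing the data
        self._cached = None
        self.computations = 0        # how many times the lineage ran

    def collect(self):
        if self._cached is not None:
            return self._cached      # cached: no recomputation
        self.computations += 1       # uncached: recompute everything
        return list(self._compute())

    def cache(self):
        # Materialize once; later actions reuse the cached result.
        self.computations += 1
        self._cached = list(self._compute())
        return self

ds = LazyDataset(lambda: (x * x for x in range(4)))
ds.collect()
ds.collect()                         # second action recomputes again
print(ds.computations)               # -> 2
ds.cache()
ds.collect()
ds.collect()                         # cached: counter stays put
print(ds.computations)               # -> 3
```

    A cartesian over uncached inputs hits this repeatedly - one side is 
re-evaluated once per partition of the other side - which is why caching 
or checkpointing the inputs first is the usual mitigation.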
    
    b) If we were to do a shuffle-based cartesian, I would implement it 
differently - take a look at how Apache Pig has implemented it for a more 
efficient way to do it. (By the way, I don't think the implementation in 
this PR actually works, but I have not looked at it in detail.)
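    For reference, the Pig-style approach is a grid-partitioned cross 
product: replicate each left record across one row of a reducer grid and 
each right record across one column, so every pair meets in exactly one 
cell. A minimal pure-Python sketch of that idea (function name and 
parameters are illustrative, not Pig's or Spark's actual API):

```python
import itertools
import random

def grid_cartesian(left, right, rows=3, cols=2):
    """Shuffle-style cross product over a rows x cols grid of 'reducers'.

    Each left record is sent to one random grid row, replicated across
    all columns; each right record is sent to one random column,
    replicated across all rows. Every (l, r) pair then meets in exactly
    one cell, where a local nested-loop product is computed.
    """
    cells = {(i, j): ([], []) for i in range(rows) for j in range(cols)}
    for l in left:
        i = random.randrange(rows)
        for j in range(cols):            # replicate across one grid row
            cells[(i, j)][0].append(l)
    for r in right:
        j = random.randrange(cols)
        for i in range(rows):            # replicate across one grid column
            cells[(i, j)][1].append(r)
    out = []
    for ls, rs in cells.values():        # "reduce" side: local cross product
        out.extend(itertools.product(ls, rs))
    return out

pairs = grid_cartesian([1, 2, 3], ["x", "y"])
print(len(pairs))                        # -> 6, the full 3 x 2 product
```

    The replication cost is bounded (each record is copied to one row or 
one column of the grid, not to every cell), which is what makes this 
cheaper than naively broadcasting one whole side.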

