Hi all,

I'm reading the shuffle implementation in Spark.
My understanding is that the shuffle does not overlap with the upstream stage.

Would it be helpful to overlap the computation of the upstream stage with the
shuffle (I mean the network copy, as in Hadoop)? If so, is there any plan to
implement this in an upcoming version?

--Z



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/Shuffle-overlapping-tp7902.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org