Github user eyalfa commented on the issue:

    https://github.com/apache/spark/pull/21369
  
    well, I took the time to figure out how the iterator is eventually used;
    most of it boils down to `org.apache.spark.scheduler.ShuffleMapTask#runTask`, which does:
    `writer.write(rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]])`
    looking at the `org.apache.spark.shuffle.ShuffleWriter#write` implementations, 
it seems all of them first exhaust the iterator and then perform some kind of 
post-processing: e.g. merging spills, sorting, writing per-partition files and then 
concatenating them into a single file... bottom line: the iterator may actually 
be 'sitting' around for some time after reaching EOF.
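    to make the pattern concrete, here's a minimal sketch of that two-phase shape 
(a hypothetical `SketchShuffleWriter`, not Spark's actual `SortShuffleWriter`): the 
writer fully drains the iterator up front, and the exhausted iterator stays 
referenced by the caller while the post-processing runs, until `write` returns:

    ```scala
    import scala.collection.mutable.ArrayBuffer

    // Hypothetical sketch of the drain-then-post-process pattern.
    class SketchShuffleWriter[K, V](numPartitions: Int, partition: K => Int) {
      def write(records: Iterator[Product2[K, V]]): Unit = {
        val buffers = Array.fill(numPartitions)(ArrayBuffer.empty[(K, V)])

        // Phase 1: exhaust the iterator -- every record is consumed up front.
        while (records.hasNext) {
          val kv = records.next()
          buffers(partition(kv._1)) += ((kv._1, kv._2))
        }

        // Phase 2: post-processing (a stand-in for sorting, merging spills and
        // writing/concatenating partition files). The iterator hit EOF in
        // phase 1, yet it is still in scope here until write() returns.
        buffers.zipWithIndex.foreach { case (buf, pid) =>
          val sorted = buf.sortBy(_._1.hashCode())
          println(s"partition $pid: ${sorted.size} records")
        }
      }
    }
    ```

    so whatever the iterator holds onto isn't released until phase 2 completes, 
not when the last element is consumed.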
    I'll implement the 'simple approach' for this PR, but I think this deserves 
a separate JIRA issue + PR.

