Github user e-dorigatti commented on a diff in the pull request:
https://github.com/apache/spark/pull/21383#discussion_r189872060
--- Diff: python/pyspark/shuffle.py ---
@@ -67,6 +67,19 @@ def get_used_memory():
return 0
+def safe_iter(f):
+    """ Wraps f to make it safe (i.e. it does not lead to data loss) to use
+    inside a for loop, by making StopIteration exceptions raised inside f
+    explicit.
+    """
+    def wrapper(*args, **kwargs):
+        try:
+            return f(*args, **kwargs)
+        except StopIteration as exc:
+            raise RuntimeError('StopIteration in client code', exc)
--- End diff ---
Most of the time, the user function is called within a loop (see e.g.
`RDD.foreach`
[here](https://github.com/e-dorigatti/spark/blob/ec7854a8504ec08485b3536ea71483cce46f9500/python/pyspark/rdd.py#L799)).
If a `StopIteration` happens inside that function, pyspark will exit the loop
and will not process the remaining items in that partition. This means that
some data will _disappear_, leaving no trace. See [this
issue](https://issues.apache.org/jira/browse/SPARK-24034) for a code example.
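
To make the failure mode concrete, here is a minimal, pyspark-free sketch
under Python 3 (`buggy_udf` is an illustrative stand-in for a user function;
`safe_iter` restates the wrapper from the diff above, plus the `return
wrapper` that the quoted hunk cuts off):

```python
def buggy_udf(x):
    # a user function that (incorrectly) raises StopIteration
    if x == 2:
        raise StopIteration
    return x * 10

# `map` is not a generator, so a StopIteration escaping the user function
# propagates out of map.__next__ and silently ends the iteration:
print(list(map(buggy_udf, [1, 2, 3, 4])))  # [10] -- 3 and 4 are lost

def safe_iter(f):
    def wrapper(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except StopIteration as exc:
            raise RuntimeError('StopIteration in client code', exc)
    return wrapper

try:
    list(map(safe_iter(buggy_udf), [1, 2, 3, 4]))
except RuntimeError as e:
    print('caught:', e)  # the bug is now loud instead of silently losing data
```

Note that inside a *generator*, Python 3.7+ already converts such a
`StopIteration` into a `RuntimeError` (PEP 479), but `map`, `filter` and
hand-written iterator classes still swallow it, which is why the explicit
wrapper is needed here.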