ConeyLiu commented on a change in pull request #26239: [SPARK-29582][PYSPARK]
Unify the behavior of pyspark.TaskContext with spark core
URL: https://github.com/apache/spark/pull/26239#discussion_r339888616
##########
File path: python/pyspark/worker.py
##########
@@ -596,6 +599,10 @@ def process():
             profiler.profile(process)
         else:
             process()
+
+        # reset task context to None
+        TaskContext._setTaskContext(None)
+        BarrierTaskContext._setTaskContext(None)
Review comment:
> Hm, what happens if it fails with exceptions in the middle of execution in
this worker?

If an exception occurs, the worker is shut down with `sys.exit(-1)`.
> Is it really needed? We always set the global TaskContext and never reset
it previously.
Previously:
```python
rdd = ...
barriered = rdd.barrier().mapPartitions(...)
barriered.mapPartitions(...)  # here the stale BarrierTaskContext still exists
```
This reset is just a safety guard; it shouldn't add any overhead or change
behavior.