ueshin commented on code in PR #53016:
URL: https://github.com/apache/spark/pull/53016#discussion_r2535657022
##########
python/pyspark/util.py:
##########
@@ -917,6 +918,67 @@ def default_api_mode() -> str:
return "classic"
+class _FaulthandlerHelper:
+    def __init__(self) -> None:
+        self._log_path: Optional[str] = None
+        self._log_file: Optional[TextIO] = None
+        self._periodic_dump = False
+
+    def start(self) -> None:
+        if self._log_path:
+            raise Exception("Fault handler is already registered. No second registration allowed")
+        self._log_path = os.environ.get("PYTHON_FAULTHANDLER_DIR", None)
+        traceback_dump_interval_seconds = os.environ.get(
+            "PYTHON_TRACEBACK_DUMP_INTERVAL_SECONDS", None
+        )
+        if self._log_path:
+            self._log_path = os.path.join(self._log_path, str(os.getpid()))
+            self._log_file = open(self._log_path, "w")
+
+            faulthandler.enable(file=self._log_file)
+
+        if (
+            traceback_dump_interval_seconds is not None
+            and int(traceback_dump_interval_seconds) > 0
+        ):
+            self._periodic_dump = True
+            faulthandler.dump_traceback_later(int(traceback_dump_interval_seconds), repeat=True)
Review Comment:
In daemon mode, the worker will wait at `split_index = read_int(infile)` in `worker` (or at `check_python_version(infile)` in the other worker modules), so this will almost always be enabled and will unnecessarily dump the traceback while the worker is just waiting for the next execution.
Can we delay setting this up until the place where it is currently done, or have the daemon block so that the worker doesn't go into `main` until the next execution starts?
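For illustration, a minimal sketch of that deferral (the `start_periodic_dump` / `stop_periodic_dump` split is hypothetical, not something this PR defines): enable the fault handler at worker startup as before, but only arm the periodic dump once a task actually starts, and cancel it before the worker goes back to waiting.

```python
import faulthandler
import os
from typing import Optional, TextIO


class _FaulthandlerHelper:
    def __init__(self) -> None:
        self._log_file: Optional[TextIO] = None
        self._interval: Optional[int] = None

    def start(self) -> None:
        # Runs once at worker startup (possibly long before any task arrives).
        log_dir = os.environ.get("PYTHON_FAULTHANDLER_DIR")
        if log_dir:
            self._log_file = open(os.path.join(log_dir, str(os.getpid())), "w")
            faulthandler.enable(file=self._log_file)

        interval = os.environ.get("PYTHON_TRACEBACK_DUMP_INTERVAL_SECONDS")
        if interval is not None and int(interval) > 0:
            # Only remember the interval here; don't arm the timer yet, so an idle
            # daemon worker blocked on read_int(infile) doesn't keep dumping tracebacks.
            self._interval = int(interval)

    def start_periodic_dump(self) -> None:
        # Called once the next task's first bytes have been read in main().
        if self._interval is not None:
            faulthandler.dump_traceback_later(self._interval, repeat=True)

    def stop_periodic_dump(self) -> None:
        # Called when the task finishes, before the worker goes back to waiting.
        if self._interval is not None:
            faulthandler.cancel_dump_traceback_later()
```

With something along these lines, `main` (or the daemon loop) could call `start_periodic_dump()` right after the first read for a task returns and `stop_periodic_dump()` in its `finally` block, so an idle daemon worker never has the dump timer armed.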