gaogaotiantian commented on code in PR #53306:
URL: https://github.com/apache/spark/pull/53306#discussion_r2628201586
##########
python/pyspark/daemon.py:
##########
@@ -174,15 +175,20 @@ def handle_sigterm(*args):
     while True:
         if poller is not None:
-            ready_fds = [fd_reverse_map[fd] for fd, _ in poller.poll(1000)]
-        else:
-            try:
-                ready_fds = select.select([0, listen_sock], [], [], 1)[0]
-            except select.error as ex:
-                if ex[0] == EINTR:
-                    continue
+            ready_fds = []
+            # Unlike select, poll timeout is in millis.
+            for fd, event in poller.poll(1000):
+                if event & (select.POLLIN | select.POLLHUP):
+                    # Data can be read (on POLLHUP the peer hung up, so reads will
+                    # return 0 bytes, in which case we want to break out - this is
+                    # consistent with how select behaves).
+                    ready_fds.append(fd_reverse_map[fd])
                 else:
-                    raise
+                    # Could be POLLERR or POLLNVAL (select would raise in this case).
+                    raise PySparkRuntimeError(f"Polling error - event {event} on fd {fd}")
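A self-contained sketch (not Spark code) of the readiness loop the diff introduces, runnable on any POSIX Python. A `socketpair` stands in for the daemon's sockets, and `fd_reverse_map` mirrors the name in the diff; everything else is illustrative.

```python
import select
import socket

a, b = socket.socketpair()

poller = select.poll()
# poll() reports raw fd integers, so keep a map back to the owning objects.
fd_reverse_map = {a.fileno(): a}
poller.register(a, select.POLLIN)

b.sendall(b"ping")
b.close()  # the hangup makes later polls on `a` report POLLHUP as well

ready_fds = []
# Unlike select, the poll timeout is in milliseconds; 1000 ms matches the
# old select timeout of 1 second.
for fd, event in poller.poll(1000):
    if event & (select.POLLIN | select.POLLHUP):
        # Readable; after POLLHUP a read returns 0 bytes, which is exactly
        # how a select-based loop would observe the peer going away.
        ready_fds.append(fd_reverse_map[fd])

for sock in ready_fds:
    print("read:", sock.recv(4096))  # b"ping" here; a subsequent read yields b""
```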
Review Comment:
I don't like `OSError` either. I think the general rule is to make all known
exceptions a Spark error; that leaves us the ability to add an error class in
the future. Again, on the driver side this is just a `PythonException` - with
no `errorClass` attached either, because that's not supported yet.
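For context, a minimal sketch of the pattern being argued for: wrap a known failure in a Spark-defined error now, so an error class can be attached later without changing the exception type. `PySparkRuntimeError` is a real class in `pyspark.errors` (Spark 3.4+); the helper name and the fd/event values are hypothetical.

```python
from pyspark.errors import PySparkRuntimeError

def fail_poll(fd: int, event: int) -> None:
    # A Spark error rather than a builtin OSError: callers (and eventually
    # an error class) can rely on the PySpark exception hierarchy.
    raise PySparkRuntimeError(f"Polling error - event {event} on fd {fd}")

# On the worker this is an ordinary Python exception; per the comment above,
# once it crosses to the driver it surfaces as a PythonException with no
# errorClass attached.
try:
    fail_poll(fd=7, event=32)  # 32 == select.POLLNVAL on Linux
except PySparkRuntimeError as e:
    print(type(e).__name__, e)
```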