allisonwang-db commented on code in PR #53306:
URL: https://github.com/apache/spark/pull/53306#discussion_r2625173682


##########
python/pyspark/daemon.py:
##########
@@ -174,15 +175,20 @@ def handle_sigterm(*args):
 
         while True:
             if poller is not None:
-                ready_fds = [fd_reverse_map[fd] for fd, _ in poller.poll(1000)]
-            else:
-                try:
-                    ready_fds = select.select([0, listen_sock], [], [], 1)[0]
-                except select.error as ex:
-                    if ex[0] == EINTR:
-                        continue
+                ready_fds = []
+                # Unlike select, poll timeout is in millis.
+                for fd, event in poller.poll(1000):
+                    if event & (select.POLLIN | select.POLLHUP):
+                        # Data can be read (for POLLHUP peer hang up, so reads will return
+                        # 0 bytes, in which case we want to break out - this is consistent
+                        # with how select behaves).
+                        ready_fds.append(fd_reverse_map[fd])
                     else:
-                        raise
+                        # Could be POLLERR or POLLNVAL (select would raise in this case).
+                        raise PySparkRuntimeError(f"Polling error - event {event} on fd {fd}")
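
For context, the hunk consumes a `poller` and an `fd_reverse_map` that are built elsewhere in `daemon.py`; a minimal, self-contained sketch of that kind of setup (socket details and the loop below are illustrative, not copied from the PR) looks roughly like this:

```python
import select
import socket

# Illustrative setup only (not the PR's code): register stdin (fd 0) and a
# listening socket with a poll object, keeping a reverse map from raw fd back
# to the object the daemon loop wants. select.poll() is POSIX-only.
listen_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listen_sock.bind(("127.0.0.1", 0))
listen_sock.listen(1)

poller = select.poll()
fd_reverse_map = {}
for src in (0, listen_sock):  # mirrors select.select([0, listen_sock], ...)
    fd = src if isinstance(src, int) else src.fileno()
    poller.register(fd, select.POLLIN)
    fd_reverse_map[fd] = src
```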

Review Comment:
   Most `PySparkRuntimeError`s are user-facing exceptions with proper error
classes and actionable error messages. If this is a rare low-level system
issue, it's better to keep the original exception's (OSError) error message. WDYT?
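
   A minimal sketch of the suggested alternative (the helper name here is hypothetical; only the loop body corresponds to the hunk above): surface `POLLERR`/`POLLNVAL` as a plain `OSError`, mirroring what `select.select()` raises for a bad descriptor, instead of a user-facing `PySparkRuntimeError`:

```python
import select

def poll_ready_fds(poller, fd_reverse_map, timeout_ms=1000):
    """Hypothetical helper mirroring the loop above; raises OSError on bad fds."""
    ready_fds = []
    # poll() takes milliseconds, unlike select.select()'s seconds.
    for fd, event in poller.poll(timeout_ms):
        if event & (select.POLLIN | select.POLLHUP):
            # Readable, or the peer hung up (reads will return 0 bytes).
            ready_fds.append(fd_reverse_map[fd])
        else:
            # POLLERR / POLLNVAL: raise a plain OSError, as select.select()
            # would for a bad descriptor, rather than a PySparkRuntimeError.
            raise OSError(f"poll() reported error event {event} on fd {fd}")
    return ready_fds
```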



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
