Github user shivaram commented on a diff in the pull request:

    https://github.com/apache/spark/pull/15471#discussion_r84730780
  
    --- Diff: core/src/main/scala/org/apache/spark/api/r/RBackendHandler.scala ---
    @@ -83,7 +86,29 @@ private[r] class RBackendHandler(server: RBackend)
               writeString(dos, s"Error: unknown method $methodName")
           }
         } else {
    +      // To avoid timeouts when reading results in SparkR driver, we will be regularly sending
    +      // heartbeat responses. We use special code +1 to signal the client that backend is
    +      // alive and it should continue blocking for result.
    +      val execService = ThreadUtils.newDaemonSingleThreadScheduledExecutor("SparkRKeepAliveThread")
    --- End diff ---
    
    I'm not sure how expensive it is to create and destroy an executor service each time. Can we just schedule at a fixed rate when we get the request and then cancel the scheduled task at the end of the request?
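
    As a rough sketch of the suggested pattern (illustrative only, not the PR's code: KeepAliveSketch, withHeartbeat, sendHeartbeat, and the 10-second interval are assumptions; only ThreadUtils.newDaemonSingleThreadScheduledExecutor comes from the diff), the executor could be created once, and each request would only schedule and cancel its own heartbeat task:

        package org.apache.spark.api.r

        import java.util.concurrent.{ScheduledFuture, TimeUnit}

        import org.apache.spark.util.ThreadUtils

        private[r] object KeepAliveSketch {
          // Created once for the backend, not per request.
          private val scheduler =
            ThreadUtils.newDaemonSingleThreadScheduledExecutor("SparkRKeepAliveThread")

          // Illustrative interval; the real value belongs to the PR, not this sketch.
          private val heartbeatIntervalSecs = 10L

          // Runs body, sending heartbeats at a fixed rate until it completes.
          def withHeartbeat[T](sendHeartbeat: () => Unit)(body: => T): T = {
            val task: ScheduledFuture[_] = scheduler.scheduleAtFixedRate(
              new Runnable { override def run(): Unit = sendHeartbeat() },
              heartbeatIntervalSecs, heartbeatIntervalSecs, TimeUnit.SECONDS)
            try {
              body
            } finally {
              // Cancel only this request's heartbeat; the shared executor stays alive.
              task.cancel(false)
            }
          }
        }

    The point of this shape is that only the per-request ScheduledFuture is cancelled, so no executor thread is created or torn down on the request hot path.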

