[ https://issues.apache.org/jira/browse/SPARK-6980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14541270#comment-14541270 ]
Harsh Gupta commented on SPARK-6980:
------------------------------------
[~bryanc] Sure thing ... I am already keeping a watch on this.
> Akka timeout exceptions indicate which conf controls them
> ---------------------------------------------------------
>
> Key: SPARK-6980
> URL: https://issues.apache.org/jira/browse/SPARK-6980
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: Imran Rashid
> Assignee: Harsh Gupta
> Priority: Minor
> Labels: starter
> Attachments: Spark-6980-Test.scala
>
>
> If you hit one of the akka timeouts, you just get an exception like
> {code}
> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
> {code}
> The exception doesn't indicate how to change the timeout, though there is
> usually (always?) a corresponding setting in {{SparkConf}}. It would be
> nice if the exception included the relevant setting.
> I think this should be pretty easy to do -- we just need to create something
> like a {{NamedTimeout}}. It would have its own {{await}} method that catches the
> akka timeout and rethrows it as its own exception. We should change
> {{RpcUtils.askTimeout}} and {{RpcUtils.lookupTimeout}} to always return a
> {{NamedTimeout}}, so we can be sure that any time we hit a timeout, we get a
> better exception.
> Given the latest refactoring to the rpc layer, this needs to be done in both
> {{AkkaUtils}} and {{AkkaRpcEndpoint}}.