Github user BryanCutler commented on the pull request:
https://github.com/apache/spark/pull/6205#issuecomment-107262210
@squito I was thinking of using this in place of what we have, but it actually
wouldn't cover the case where `Await.result` times out while the future is
still not completed. To cover all cases, we could add a function to
`RpcTimeout` that takes a `Future[T]` as input and uses `recover` (as in
the code above) to handle a timeout on the future. We would then have to
subclass `TimeoutException`, as you suggested, so we don't end up amending the
message twice. If you want, I could code this up to see how it would look.
One thing I'm not too sure of, though: using `recover` requires an
`ExecutionContext`. In the test code I posted it was fine to import the
implicit default global context, but I don't know whether the same holds
for the rest of Spark. Any thoughts on this?
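A rough sketch of what I have in mind (the names `RpcTimeoutException`, `addMessageIfTimeout`, and `awaitResult` are just placeholders, and the global context here stands in for whatever `ExecutionContext` we settle on):

```scala
import java.util.concurrent.TimeoutException
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Subclass so we can tell an already-amended exception apart and
// avoid appending the description twice.
class RpcTimeoutException(message: String) extends TimeoutException(message)

// Placeholder for RpcTimeout: a duration plus a description of the
// conf property it was read from.
class RpcTimeout(val duration: FiniteDuration, val description: String) {

  private def amended(te: TimeoutException): RpcTimeoutException =
    new RpcTimeoutException(te.getMessage + " " + description)

  // Amend any plain TimeoutException the future fails with, so the
  // caller can see which timeout setting was responsible.
  def addMessageIfTimeout[T](future: Future[T]): Future[T] =
    future.recover {
      case rte: RpcTimeoutException => throw rte // already amended
      case te: TimeoutException     => throw amended(te)
    }

  // Block for the result; this also covers Await.result itself timing
  // out while the future is still pending.
  def awaitResult[T](future: Future[T]): T =
    try {
      Await.result(addMessageIfTimeout(future), duration)
    } catch {
      case rte: RpcTimeoutException => throw rte
      case te: TimeoutException     => throw amended(te)
    }
}
```

Non-blocking callers could pass their future through `addMessageIfTimeout`, while blocking callers use `awaitResult`; the remaining question is which `ExecutionContext` the `recover` should run on.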