Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20512
Can one of the admins verify this patch?
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/20512
This is just far enough outside my expertise that I don't have an opinion -- but
@zsxwing might have some thoughts.
---
Github user vundela commented on the issue:
https://github.com/apache/spark/pull/20512
cc @squito @vanzin
Can you please comment on this PR?
---
Github user peshopetrov commented on the issue:
https://github.com/apache/spark/pull/20512
Any update?
We have rolled out our Spark clusters with this change and it seems to be
working great. We see no lingering connections on the masters.
---
Github user peshopetrov commented on the issue:
https://github.com/apache/spark/pull/20512
For completeness, it should be possible to enable OS-level TCP keepalives.
The client already enables TCP keepalive on its side, and it should be possible on
the server too.
However,
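For illustration, here is a minimal Netty sketch of what enabling `SO_KEEPALIVE` on the server's accepted sockets could look like. This is a hedged sketch, not the PR's actual code: the port and the empty channel initializer are placeholders.

```scala
import io.netty.bootstrap.ServerBootstrap
import io.netty.channel.{ChannelInitializer, ChannelOption}
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioServerSocketChannel

object KeepAliveServerSketch {
  def main(args: Array[String]): Unit = {
    val bossGroup = new NioEventLoopGroup(1)
    val workerGroup = new NioEventLoopGroup()
    try {
      val bootstrap = new ServerBootstrap()
        .group(bossGroup, workerGroup)
        .channel(classOf[NioServerSocketChannel])
        // Enable OS-level TCP keepalive on every accepted child socket,
        // mirroring what the client already does on its side.
        .childOption(ChannelOption.SO_KEEPALIVE, java.lang.Boolean.TRUE)
        .childHandler(new ChannelInitializer[SocketChannel] {
          override def initChannel(ch: SocketChannel): Unit = {
            // Application handlers would be added to ch.pipeline() here.
          }
        })
      // 7077 is just a placeholder port for this sketch.
      val channel = bootstrap.bind(7077).sync().channel()
      channel.closeFuture().sync()
    } finally {
      bossGroup.shutdownGracefully()
      workerGroup.shutdownGracefully()
    }
  }
}
```

Note that how quickly the keepalive probes fire is still governed by kernel settings (e.g. tcp_keepalive_time on Linux), which the application does not normally override.
---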
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20512
Is it possible that TCP keepalive is disabled by the kernel, so that your
approach would not work? I was wondering whether it would be better to add an
application-level heartbeat message to detect lost workers.
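As a rough sketch of such an application-level heartbeat (all names here are hypothetical and not Spark's actual RPC API), the master could track the last heartbeat time per worker and expire the ones that go silent:

```scala
import java.util.concurrent.{ConcurrentHashMap, Executors, TimeUnit}

// Hypothetical sketch of an application-level heartbeat monitor;
// none of these names come from Spark's actual codebase.
class WorkerHeartbeatMonitor(timeoutMs: Long, onLost: String => Unit) {
  private val lastSeen = new ConcurrentHashMap[String, java.lang.Long]()
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  // Called (e.g. from an RPC handler) whenever a worker heartbeats.
  def recordHeartbeat(workerId: String): Unit =
    lastSeen.put(workerId, System.currentTimeMillis())

  // Periodically scan for workers whose last heartbeat is too old.
  def start(checkIntervalMs: Long): Unit =
    scheduler.scheduleAtFixedRate(new Runnable {
      override def run(): Unit = {
        val now = System.currentTimeMillis()
        val it = lastSeen.entrySet().iterator()
        while (it.hasNext) {
          val entry = it.next()
          if (now - entry.getValue > timeoutMs) {
            it.remove()
            onLost(entry.getKey) // e.g. mark the worker as lost
          }
        }
      }
    }, checkIntervalMs, checkIntervalMs, TimeUnit.MILLISECONDS)

  def stop(): Unit = scheduler.shutdownNow()
}
```

Unlike TCP keepalive, this works even when the kernel's keepalive support is disabled or tuned to very long intervals, at the cost of extra messages on the RPC channel.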
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/20512
Can one of the admins verify this patch?
---