Github user GrahamDennis commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-52564363
@pwendell: I've verified that #1972 fixes the problem I was having. This
PR also addresses a bug (unfiled, but related to SPARK-2878) that you can't
distribute a
Github user GrahamDennis commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-52581284
@rxin: No, #1972 isn't enough. I've updated my example project to
reproduce this problem, see
https://github.com/GrahamDennis/spark-kryo-serialisation
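For context, the failure mode here is that a custom KryoRegistrator lives only in the application jar, which the executor cannot see on its classpath at startup, so it cannot apply the same Kryo registrations as the driver. A minimal sketch of such a registrator (class names are hypothetical and not taken from the linked repro project):

    import com.esotericsoftware.kryo.Kryo
    import org.apache.spark.serializer.KryoRegistrator

    // Hypothetical application class that exists only in the user jar.
    case class MyRecord(id: Long, payload: String)

    // Registrator shipped in the application jar; if the executor cannot
    // load this class, its Kryo registrations no longer match the driver's.
    class MyKryoRegistrator extends KryoRegistrator {
      override def registerClasses(kryo: Kryo): Unit = {
        kryo.register(classOf[MyRecord])
      }
    }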
Github user arahuja commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-52446389
I think this may be the issue I have been wrangling with the last couple
days. I see a variety of odd Kryo related errors, slightly different each
time:
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-52407167
Hey @GrahamDennis thanks for an extremely thorough analysis of this issue
here and on the JIRA. I think that @rxin was able to solve this in a PR that
improves the way
Github user GrahamDennis commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-51888165
I've updated my PR, and now instead of getting the Executor process to
download jars before registering itself with the application driver, the Worker
process
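A rough illustration of the idea described above (this is not the PR's ExecutorRunner code; the helper name and signature are made up for the sketch): the Worker fetches the user jars into the executor's working directory before the executor JVM starts, so they can be placed on its classpath from the beginning.

    import java.net.URL
    import java.nio.file.{Files, Paths, StandardCopyOption}

    // Hypothetical helper: download each user jar into the executor's
    // working directory and return the local paths, which the launcher
    // can then append to the executor's classpath.
    def fetchUserJars(jarUrls: Seq[String], workDir: String): Seq[String] =
      jarUrls.map { url =>
        val target = Paths.get(workDir, url.split("/").last)
        val in = new URL(url).openStream()
        try Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING)
        finally in.close()
        target.toString
      }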
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/1890#discussion_r16154021
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -114,12 +115,12 @@ private[spark] class ExecutorRunner(
case
Github user GrahamDennis commented on a diff in the pull request:
https://github.com/apache/spark/pull/1890#discussion_r16155353
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/ExecutorRunner.scala ---
@@ -114,12 +115,12 @@ private[spark] class ExecutorRunner(
GitHub user GrahamDennis opened a pull request:
https://github.com/apache/spark/pull/1890
[SPARK-2878]: Fix custom spark.kryo.registrator
This is a work-in-progress, and I'm looking for feedback on my current
approach. My aim here is to add the user jars specified in SparkConf
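For readers following along, the user jars and registrator in question are the ones set on the driver's SparkConf, along the lines of this sketch (the registrator class name and jar path are placeholders, not values from the PR):

    import org.apache.spark.{SparkConf, SparkContext}

    // Placeholder registrator class and assembly jar path; substitute the
    // application's own values.
    val conf = new SparkConf()
      .setAppName("kryo-registrator-example")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      .set("spark.kryo.registrator", "com.example.MyKryoRegistrator")
      .setJars(Seq("target/scala-2.10/my-app-assembly.jar"))

    val sc = new SparkContext(conf)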
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-51767200
Can one of the admins verify this patch?
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/1890#issuecomment-51805750
FWIW I think this is already what happens in YARN, as we use Hadoop's
distributed cache to send out the jars and include them on the executor
classpath at startup.