Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12977252
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -326,8 +326,7 @@ private[spark] class SparkSubmitArguments(args: Seq[String])
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12975962
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -326,8 +326,7 @@ private[spark] class SparkSubmitArguments(args: Seq[String])
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/848
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled…
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43862984
I independently tested this on YARN 2.4 running in a VM where I could
reproduce the problem. This change indeed allows jars loaded with --jars to be
accessible in executors.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43836661
This doesn't apply to standalone or Mesos. For these two modes, Spark
submit translates `--jars` to `spark.jars`, then SparkContext uploads these
jars to the HTTP server…
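The standalone/Mesos path described above can be sketched roughly as follows. This is an illustrative model only, not Spark's actual code: the comma-separated `--jars` value is folded into the `spark.jars` property, which SparkContext later serves to executors over HTTP.

```scala
// Sketch (assumed behavior): in standalone/Mesos modes, spark-submit
// copies the --jars argument into the spark.jars configuration key;
// SparkContext then distributes those jars to executors itself.
object JarsToConf {
  def translate(jarsArg: Option[String], conf: Map[String, String]): Map[String, String] =
    jarsArg.fold(conf)(jars => conf + ("spark.jars" -> jars))
}
```

For example, `translate(Some("deps/a.jar,deps/b.jar"), Map.empty)` produces a configuration whose `spark.jars` entry holds the original comma-separated list unchanged.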
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43832565
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15133/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43832563
Merged build finished. All automated tests passed.
Github user mateiz commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43827704
In standalone mode and on Mesos, does this fix require the JARs to be
accessible from the same URL on all nodes?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43825518
Merged build triggered.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43825524
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43822767
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43822769
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15128/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43816549
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43816530
Merged build triggered.
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43816325
@dbtsai Could you backport the patch to branch-0.9 and test it on your
cluster?
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12923805
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -479,37 +485,24 @@ object ClientBase {
extraClassPath.f
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12923791
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -479,37 +485,24 @@ object ClientBase {
extraClassPath.f
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43815204
Yes, we can also control the ordering in this way.
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43814642
It worked on the driver before, so the main issue is that those files are not
in the executors' distributed cache. But I like the idea of adding them
explicitly so we won't miss anything…
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43814337
The symbolic links may not be under the PWD. That is why it didn't work
before.
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43812877
Thanks. It looks great to me, and better than my patch.
`cachedSecondaryJarLinks.foreach(addPwdClasspathEntry)` is not needed since
we have
`addPwdClasspathEntry`…
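The classpath assembly being discussed can be sketched as below. This is a hedged stand-in, not the real helper from ClientBase.scala, which may differ; `$PWD` stands in for YARN's `Environment.PWD`, which each container expands to its own working directory.

```scala
import scala.collection.mutable

// Sketch only: models how a per-entry helper could build the container's
// CLASSPATH environment variable from distributed-cache link names.
object ClasspathSketch {
  val env = mutable.Map.empty[String, String]

  // Append a container-local entry ($PWD/<entry>) to CLASSPATH.
  def addPwdClasspathEntry(entry: String): Unit = {
    val prefix = env.get("CLASSPATH").map(_ + ":").getOrElse("")
    env("CLASSPATH") = prefix + "$PWD/" + entry
  }
}
```

With this shape, `cachedSecondaryJarLinks.foreach(ClasspathSketch.addPwdClasspathEntry)` would append each cached jar link under the container working directory, and calling it in a fixed sequence controls the classpath ordering mentioned above.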
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12921709
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -479,37 +485,24 @@ object ClientBase {
extraClassPath.f
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/848#discussion_r12921552
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -479,37 +485,24 @@ object ClientBase {
extraClassPath.f
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43804540
Merged build finished. All automated tests passed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43804541
All automated tests passed.
Refer to this link for build results:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/15125/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43800111
Merged build started.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/848#issuecomment-43800093
Merged build triggered.
GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/848
[SPARK-1870] Make spark-submit --jars work in yarn-cluster mode.
Sends secondary jars to the distributed cache of all containers and adds the
cached jars to the classpath before the executors start.
`spark…
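The two steps in the PR description can be modeled as a small sketch. The names below (`cacheLinkName`, `executorClasspath`) are hypothetical and not from the patch; the assumption is that each jar shipped to the YARN distributed cache shows up in the container as a link named after the file, and that `$PWD`-prefixed entries put those links on the executor classpath.

```scala
// Hypothetical sketch of the mechanism, not the patch itself:
// 1) a jar in the distributed cache appears in the container as a link
//    named after the file (its URI basename);
// 2) executors see those links via $PWD-prefixed classpath entries.
object SecondaryJarsSketch {
  def cacheLinkName(jarUri: String): String = jarUri.split('/').last

  def executorClasspath(jarUris: Seq[String]): String =
    jarUris.map(u => "$PWD/" + cacheLinkName(u)).mkString(":")
}
```

For example, `executorClasspath(Seq("hdfs:///tmp/a.jar", "/x/b.jar"))` yields `$PWD/a.jar:$PWD/b.jar`, which is the ordering-preserving classpath the discussion above is about.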