Github user felixcheung closed the pull request at:
https://github.com/apache/spark/pull/10652
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10652#discussion_r50342342
--- Diff: core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala ---
@@ -36,7 +36,8 @@ private[deploy] object RPackageUtils extends Logging {
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173121016
I realize that; my point is that even in client mode the driver could be
running on a worker machine, as in the case where the Spark job is submitted
from another YARN app.
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173116720
It is possible to get the deploy mode from "spark.submit.deployMode" and
check whether it is "client". You can take a look at
https://github.com/apache/spark/blob/master/core/s
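The check sun-rui suggests above can be sketched roughly as follows. This is a minimal illustration (written in Java for brevity), with a plain `Map` standing in for the real `SparkConf`; the key name `spark.submit.deployMode` comes from the comment, but the default of `"client"` when the key is unset is an assumption, not taken from the Spark source:

```java
import java.util.Map;

public class DeployModeCheck {
    // Hypothetical helper: true when the driver runs on the submitting
    // machine. Defaulting to "client" when unset is an assumption.
    static boolean isClientMode(Map<String, String> conf) {
        return "client".equals(conf.getOrDefault("spark.submit.deployMode", "client"));
    }

    public static void main(String[] args) {
        System.out.println(isClientMode(Map.of("spark.submit.deployMode", "client")));  // true
        System.out.println(isClientMode(Map.of("spark.submit.deployMode", "cluster"))); // false
    }
}
```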
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173110574
I don't know if there is a way to distinguish that.
It could be `spark-submit`, or Oozie calling the `SparkSubmit` class and
running the job in YARN client mode
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173084751
@felixcheung, yes, something like that
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173078695
@sun-rui is it `spark-submit foo.R`?
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173070829
RRunner is used not only for running the driver on a cluster, but also for
running an R script locally in client mode.
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10652#discussion_r50209041
--- Diff: core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala ---
@@ -36,7 +36,8 @@ private[deploy] object RPackageUtils extends Logging {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173062880
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173062882
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173062773
**[Test build #49735 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/49735/consoleFull)**
for PR 10652 at commit
[`78eb194`](https://g
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173037120
**[Test build #49735 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/49735/consoleFull)**
for PR 10652 at commit
[`78eb194`](https://gi
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173032446
Yeah, doing it just for the cluster-mode driver seems fine to me.
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173026763
The driver could also be running in YARN cluster mode, in which case a clean
state might make sense?
To me this is just to reduce the level of variability. And this was br
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-173014932
So I'm not completely sure this is a good idea. Users might have their own
R environment setup scripts in their home directory (site-file or init-file as
in the R docs
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/10652#discussion_r49159546
--- Diff: core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala ---
@@ -36,7 +36,8 @@ private[deploy] object RPackageUtils extends Logging {
Github user sun-rui commented on a diff in the pull request:
https://github.com/apache/spark/pull/10652#discussion_r49152626
--- Diff: core/src/main/scala/org/apache/spark/deploy/RPackageUtils.scala ---
@@ -36,7 +36,8 @@ private[deploy] object RPackageUtils extends Logging {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169863988
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169863989
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169861623
jenkins, retest this please
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169842106
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169842103
Merged build finished. Test FAILed.
Github user felixcheung commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169839126
jenkins, retest this please
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169837014
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10652#issuecomment-169837015
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/10652
[SPARK-12699][SPARKR] R driver process should start in a clean state
Currently we have the R worker process launched with the `--vanilla` option,
which brings it up in a clean state (without init profi
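For illustration, the effect of adding `--vanilla` to the R launch command can be sketched as follows. `buildRCommand` is a hypothetical helper (written in Java for brevity), not the actual launcher code touched by this PR; `--vanilla` itself is a real R/Rscript flag that skips site files, init files, saved workspaces, and environment files:

```java
import java.util.ArrayList;
import java.util.List;

public class RLaunchSketch {
    // Hypothetical helper: assembles the command line used to start the
    // R process; it does not execute anything.
    static List<String> buildRCommand(String rExecutable, String script, boolean vanilla) {
        List<String> cmd = new ArrayList<>();
        cmd.add(rExecutable);
        if (vanilla) {
            cmd.add("--vanilla"); // start R in a clean state
        }
        cmd.add(script);
        return cmd;
    }

    public static void main(String[] args) {
        System.out.println(buildRCommand("Rscript", "worker.R", true));  // [Rscript, --vanilla, worker.R]
        System.out.println(buildRCommand("Rscript", "worker.R", false)); // [Rscript, worker.R]
    }
}
```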