Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
Closing this as it is resolved elsewhere.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
But keep in mind that currently even spark.master can change in flight - the Spark Scala code doesn't seem to prevent that - so we could get some very wrong values. I'm not sure that is super
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
It's all of the runtime config from the currently active SparkSession, which includes all of SparkConf.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64043/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64043 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64043/consoleFull)**
for PR 14639 at commit
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
Does this API get only the Spark SQL configurations, or does it also include SparkConf?
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
Are we talking about this
http://spark.apache.org/docs/latest/api/R/sparkR.conf.html?
---
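For reference, `sparkR.conf` can read either a single key (with an optional default) or the entire runtime config. A minimal sketch, assuming an active SparkR session (the `master` and `appName` values here are placeholders):

```r
library(SparkR)

# Start (or connect to) a SparkR session
sparkR.session(master = "local[2]", appName = "conf-demo")

# Read a single config value, with a default if the key is unset
master <- sparkR.conf("spark.master", "local")

# Read the whole runtime config (SQL conf plus SparkConf) as a named list
allConf <- sparkR.conf()

sparkR.session.stop()
```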
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
If SparkConf is needed in the future, instead of passing the whole Spark conf to R via env variables, we can expose an API for accessing SparkConf in the R backend, similar to the one in PySpark.
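A sketch of what such a backend accessor might look like, analogous to PySpark's `sc.getConf().getAll()`, using SparkR's internal JVM-call mechanism. This is a hypothetical illustration, not an existing SparkR API; the internal helper names (`getSparkContext`, `callJMethod`) are assumptions about SparkR internals:

```r
# Hypothetical sketch: read SparkConf through the R backend.
# Assumes an active SparkR session; relies on SparkR internals.
getSparkConfAll <- function() {
  sc <- SparkR:::getSparkContext()               # cached Java SparkContext handle
  conf <- SparkR:::callJMethod(sc, "getConf")    # org.apache.spark.SparkConf
  pairs <- SparkR:::callJMethod(conf, "getAll")  # Array[(String, String)]
  # Convert the array of Scala tuples into a named R character vector
  keys <- sapply(pairs, function(p) SparkR:::callJMethod(p, "_1"))
  vals <- sapply(pairs, function(p) SparkR:::callJMethod(p, "_2"))
  stats::setNames(vals, keys)
}
```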
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
Thanks @sun-rui, `EXISTING_SPARKR_BACKEND_PORT` does indicate cluster mode indirectly for now. But here not only is deployMode unknown on the R side, but so are master and other Spark configurations. For
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64043 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64043/consoleFull)**
for PR 14639 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64039/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64039 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64039/consoleFull)**
for PR 14639 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64039 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64039/consoleFull)**
for PR 14639 at commit
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
@shivaram @felixcheung @sun-rui My previous commit didn't resolve the issue. It succeeded only because Spark had already been downloaded to the cache dir.
I pushed another commit to fix the issue. Overall,
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
I think there may be a simpler solution. As in my comment on the JIRA, the `EXISTING_SPARKR_BACKEND_PORT` env variable can be checked instead of fetching the whole Spark conf from the JVM into R.
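The suggested check can be sketched in a couple of lines of R; a sketch only, since the exact semantics of this variable are internal to SparkR:

```r
# Sketch: detect whether we are running under an existing backend
# (e.g. yarn-cluster mode), in which case Spark must not be downloaded.
existingPort <- Sys.getenv("EXISTING_SPARKR_BACKEND_PORT", "")
connectedToBackend <- nchar(existingPort) > 0

if (connectedToBackend) {
  # JVM already up (cluster mode): skip any Spark download/install step
} else {
  # Client mode: SPARK_HOME or a cached Spark download is needed
}
```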
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/64036/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64036 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64036/consoleFull)**
for PR 14639 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #64036 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/64036/consoleFull)**
for PR 14639 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/14639
@zjffdu Thanks for clarifying -- I now remember that in YARN cluster mode there is no `SPARK_HOME` set. However, in this case the JVM comes up first and the R process then connects to it. So in
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
There's no SPARK_HOME in yarn-cluster mode since the R process runs on a remote host in the YARN cluster rather than on the client host.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/14639
hmm, which is to my point
[here](https://github.com/apache/spark/pull/14639#discussion_r75117822) - why
isn't it working in yarn-cluster mode? is SPARK_HOME not set?
---
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
@sun-rui I verified it works in yarn-client mode, also for shell mode under yarn-client, as Spark won't be downloaded as long as SPARK_HOME exists:
```
if (!is_sparkR_shell() &&
```
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
@zjffdu, yes, there is no need to download Spark in yarn-client mode provided that spark-submit is called to launch an R script. I just want to verify that your change works in this case.
But note if
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
@sun-rui Why would you think yarn-client mode won't work? For yarn-client mode, Spark should already be installed on the client host, so there is no need to download it.
---
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
@zjffdu, does your change work when launching an R script in yarn-client mode? It seems that it won't
---
Github user zjffdu commented on the issue:
https://github.com/apache/spark/pull/14639
Thanks @sun-rui Another commit resolved the downloading issue.
---
Github user sun-rui commented on the issue:
https://github.com/apache/spark/pull/14639
This is not only about using the correct cache dir on macOS; in yarn-cluster mode there should also be no downloading of Spark at all.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/63891/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14639
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #63891 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63891/consoleFull)**
for PR 14639 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14639
**[Test build #63891 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/63891/consoleFull)**
for PR 14639 at commit