Github user bdwyer2 commented on the issue:
https://github.com/apache/spark/pull/16247
@shivaram @felixcheung I'll close this PR so that one of you can take over
in order to have it done in time for the RC.
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16247
@bdwyer2 Let us know if you have problems setting up the environment. If
so, @felixcheung or I can open a new PR that includes your changes (we can
still assign the JIRA as your contribution)
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
Yes, something like what's being used here:
https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala#L49
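For context, the line referenced above resolves the warehouse path from the session config, falling back to a built-in default when the property is unset. A minimal standalone sketch of that lookup pattern (plain Scala, with a `Map` standing in for `SparkConf`; note that `spark.sql.warehouse.default.dir` is the property being proposed in this thread, not an existing Spark conf):

```scala
// Sketch of the config-with-fallback pattern used in SharedState.scala.
// A plain Map stands in for SparkConf; "spark.sql.warehouse.default.dir"
// is the property proposed in this thread, not an existing Spark conf.
object WarehousePathSketch {
  def resolveWarehousePath(conf: Map[String, String]): String =
    conf.getOrElse("spark.sql.warehouse.dir",           // an explicit user setting wins
      conf.getOrElse("spark.sql.warehouse.default.dir", // else the SparkR-provided default
        "spark-warehouse"))                             // else Spark's built-in default

  def main(args: Array[String]): Unit = {
    println(resolveWarehousePath(Map("spark.sql.warehouse.default.dir" -> "/tmp/RtmpXYZ")))
  }
}
```

With this resolution order, setting the default-dir property from SparkR never overrides a user's explicit `spark.sql.warehouse.dir`.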
---
Github user bdwyer2 commented on the issue:
https://github.com/apache/spark/pull/16247
How would we access that value on the scala side? Would
`sparkContext.hadoopConfiguration.get("spark.sql.warehouse.default.dir")` work?
I'm currently unable to compile Spark which makes
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
But yes, _other_ Spark config properties would be set by the user in the
`sparkConfig` parameter of the `sparkR.session()` method. We would just add to that
without adding another parameter to `sparkR.session()`
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
I mean it as something we set on the SparkContext or SparkSession, and not as a
parameter of `sparkR.session()`.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70093/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70093 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70093/consoleFull)**
for PR 16247 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70093 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70093/consoleFull)**
for PR 16247 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16247
Yeah, disabling Hive for the test is fine. @bdwyer2 Can you add the new
config flag as well? We can do one final pass of review after that
---
Github user bdwyer2 commented on the issue:
https://github.com/apache/spark/pull/16247
Would calling the test with this be an acceptable solution?
```R
sparkR.session(enableHiveSupport = FALSE)
```
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
Possibly; SPARK-16027 was just a hack, and I think the root issue remains
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16247
This test failure seems related to this PR. It seems to be because the previous
Hive-enabled Spark session is not closed properly between `test_sparkSQL.R` and
`test_sparkR.R` here, in particular,
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
Re: the test failure. It might be related to this change? The call stack is
hidden; it should be trying to call into
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/16247
a new property for default warehouse LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70047 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70047/consoleFull)**
for PR 16247 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70047/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70047 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70047/consoleFull)**
for PR 16247 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70040/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70040 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70040/consoleFull)**
for PR 16247 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70040 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70040/consoleFull)**
for PR 16247 at commit
Github user jodersky commented on the issue:
https://github.com/apache/spark/pull/16247
jenkins, retest this please
---
Github user bdwyer2 commented on the issue:
https://github.com/apache/spark/pull/16247
I don't see how my last commit could have caused this
```
functions in sparkR.R: .
SparkSQL functions: Spark package found in SPARK_HOME:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70035/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70035 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70035/consoleFull)**
for PR 16247 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70035 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70035/consoleFull)**
for PR 16247 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/70032/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70032 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70032/consoleFull)**
for PR 16247 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #70032 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/70032/consoleFull)**
for PR 16247 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16247
@bdwyer2 The test case idea sounds good!
Regarding the conf naming for the warehouse dir, let's also check with
contributors who are more familiar with SQL.
cc @yhuai @cloud-fan
---
Github user bdwyer2 commented on the issue:
https://github.com/apache/spark/pull/16247
@shivaram I can create a test to verify the output of `list.files()` is the
same before and after running `sparkR.session()`.
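The test idea above (comparing the directory listing before and after session startup) can be sketched in plain Scala; the real check would live in SparkR's R test suite, and here a caller-supplied action stands in for `sparkR.session()` (names are hypothetical):

```scala
import java.io.File
import java.nio.file.Files

// Sketch of the proposed test: capture a directory's contents before and
// after an action, and check that nothing new appeared. In SparkR the
// action would be sparkR.session(); here it is just a passed-in block.
object NoNewFilesSketch {
  def listNames(dir: File): Set[String] =
    Option(dir.listFiles()).getOrElse(Array.empty[File]).map(_.getName).toSet

  def leavesNoNewFiles(dir: File)(action: => Unit): Boolean = {
    val before = listNames(dir)
    action
    val after = listNames(dir)
    (after -- before).isEmpty
  }
}
```

This catches exactly the bug being fixed here: an unwanted `spark-warehouse` (or `metastore_db`) directory appearing in the working directory on session startup.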
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
... that's actually what I meant by "we might want to pass the R
`tempdir()` along as a property" <-- this would be a new property and not
`spark.sql.warehouse.dir`
---
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16247
Or we could introduce a new property, say `spark.sql.default.warehouse`, and
set that to `tempdir()`
On Dec 10, 2016 16:53, "Felix Cheung" wrote:
> I think
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/16247
I think we should only change `spark.sql.warehouse.dir` when we are loading
SparkR as a package. This should minimize changes in the case where we are
running in cluster mode and so on.
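The conditional-default idea above (only override when SparkR is loaded as a standalone package, and never clobber a value the user set) can be sketched as follows; this is a plain-Scala illustration with hypothetical names, not the actual SparkR implementation:

```scala
object ConditionalDefaultSketch {
  // Apply the tempdir-based warehouse default only when SparkR is running
  // as a standalone package AND the user has not set the property already,
  // so cluster-mode deployments are left untouched.
  def withWarehouseDefault(conf: Map[String, String],
                           runningAsPackage: Boolean,
                           rTempDir: String): Map[String, String] =
    if (runningAsPackage && !conf.contains("spark.sql.warehouse.dir"))
      conf + ("spark.sql.warehouse.dir" -> rTempDir)
    else
      conf
}
```

The guard on `runningAsPackage` keeps the change scoped to the interactive SparkR case this PR targets, which is the point being made above about minimizing impact on cluster mode.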
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16247
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69975/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16247
**[Test build #69975 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69975/consoleFull)**
for PR 16247 at commit
Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/16247
@bdwyer2 One more thing: Is there a good way to test this ?
---