Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14897
LGTM except a comment about documentation. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14897
@cloud-fan I am fine if you think users can figure it out. Maybe document
it in [`Spark SQL, DataFrames and Datasets
Guide`](http://spark.apache.org/docs/latest/sql-programming-guide.html)
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14897
```scala
sql(s"SHOW CREATE TABLE $globalTempDB.src").show()
```
We got the following error:
```
Database 'global_temp' not found;
```
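The error is consistent with the global temp view database being virtual rather than a real catalog database: a command that resolves `db.table` purely through the catalog never sees it. A minimal, hypothetical Scala sketch of that lookup rule (the names `CatalogSketch`, `lookup`, and the sample data are illustrative only, not Spark's actual API):

```scala
// Hypothetical sketch: the global temp view database (default "global_temp")
// is virtual, so a lookup that only consults real catalog databases fails
// with "Database 'global_temp' not found;", while a lookup that checks the
// global-temp branch first resolves views registered there.
object CatalogSketch {
  val globalTempDatabase = "global_temp"        // default of spark.sql.globalTempDatabase
  private val catalogDatabases = Set("default") // real databases only
  private val globalTempViews = Set("src")      // illustrative registered global temp views

  // Resolution without the special case -- the failing path shown above.
  def lookupCatalogOnly(db: String, table: String): Either[String, String] =
    if (catalogDatabases.contains(db)) Right(s"$db.$table")
    else Left(s"Database '$db' not found;")

  // Resolution with the global-temp branch checked first.
  def lookup(db: String, table: String): Either[String, String] =
    if (db == globalTempDatabase && globalTempViews.contains(table)) Right(s"$db.$table")
    else lookupCatalogOnly(db, table)
}
```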
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/14897#discussion_r81281510
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
@@ -2433,31 +2433,65 @@ class Dataset[T] private[sql](
}
/**
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15309
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66154/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15309
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15309
**[Test build #66154 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66154/consoleFull)**
for PR 15309 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15308
**[Test build #66157 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66157/consoleFull)**
for PR 15308 at commit
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/12135
Can any admin verify this PR? It's been a long time and I really need this
feature...
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15308
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15306
**[Test build #66155 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66155/consoleFull)**
for PR 15306 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/12394
**[Test build #66156 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66156/consoleFull)**
for PR 12394 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15308
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66153/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15308
**[Test build #66153 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66153/consoleFull)**
for PR 15308 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15309
**[Test build #66154 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66154/consoleFull)**
for PR 15309 at commit
GitHub user jagadeesanas2 opened a pull request:
https://github.com/apache/spark/pull/15309
[SPARK-17736] [Documentation][SparkR] [Update R README for rmarkdown,…
## What changes were proposed in this pull request?
To build R docs (which are built when R tests are run),
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15308
LGTM
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/15178
@holdenk For the first one question, as this executor side broadcast
directly uses RDD blocks instead of creating broadcast blocks, once the
broadcast is destroyed, only the broadcasted object is
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15308
**[Test build #66153 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66153/consoleFull)**
for PR 15308 at commit
GitHub user hvanhovell opened a pull request:
https://github.com/apache/spark/pull/15308
[SPARK-17717][SQL] Add Exist/find methods to Catalog [FOLLOW-UP]
## What changes were proposed in this pull request?
We added find and exists methods for Databases, Tables and Functions to
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/15308
cc @rxin @markhamstra
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15263
**[Test build #66152 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66152/consoleFull)**
for PR 15263 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/15263#discussion_r81278191
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -19,9 +19,10 @@ package
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/15293#discussion_r81277788
--- Diff: docs/mllib-linear-methods.md ---
@@ -78,6 +78,10 @@ methods `spark.mllib` supports:
+A binary label y is denoted as either
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14553
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66148/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14553
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15293
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15293
**[Test build #66151 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66151/consoleFull)**
for PR 15293 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15293
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66151/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14553
**[Test build #66148 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66148/consoleFull)**
for PR 14553 at commit
Github user zhengruifeng commented on the issue:
https://github.com/apache/spark/pull/12718
This PR is too old and has many conflicts with current master. I will close
it.
---
Github user zhengruifeng closed the pull request at:
https://github.com/apache/spark/pull/12718
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15293
**[Test build #66151 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66151/consoleFull)**
for PR 15293 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15295
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66147/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15295
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15295
**[Test build #66147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66147/consoleFull)**
for PR 15295 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15306
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15306
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66143/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15306
**[Test build #66143 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66143/consoleFull)**
for PR 15306 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15263
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66146/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14897
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66142/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14897
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15263
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15263
**[Test build #66146 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66146/consoleFull)**
for PR 15263 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14897
**[Test build #66142 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66142/consoleFull)**
for PR 14897 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15090#discussion_r81275778
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/AnalyzeColumnCommand.scala
---
@@ -0,0 +1,175 @@
+/*
+ * Licensed to
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15292
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15292
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66145/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15090
**[Test build #66150 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66150/consoleFull)**
for PR 15090 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15292
**[Test build #66145 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66145/consoleFull)**
for PR 15292 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66149/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15292
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15292
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66144/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66149 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66149/consoleFull)**
for PR 15307 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15307
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15292
**[Test build #66144 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66144/consoleFull)**
for PR 15292 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15263#discussion_r81275138
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcRelationProvider.scala
---
@@ -19,9 +19,10 @@ package
Github user ueshin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15276#discussion_r81274823
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -906,7 +906,7 @@ case class
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15263#discussion_r81274703
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -697,4 +697,27 @@ object JdbcUtils extends
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/15263#discussion_r81274640
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala
---
@@ -697,4 +697,27 @@ object JdbcUtils extends
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15301#discussion_r81274626
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -102,6 +102,83 @@ abstract class Catalog {
def listColumns(dbName:
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/15276#discussion_r81274571
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/objects/objects.scala
---
@@ -906,7 +906,7 @@ case class AssertNotNull(child:
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15307
**[Test build #66149 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66149/consoleFull)**
for PR 15307 at commit
Github user ueshin commented on the issue:
https://github.com/apache/spark/pull/15276
@rxin Yes, in my case, I used `RowEncoder` with a schema having a lot of
fields (>250) and processed some logic. `RowEncoder` uses `GetExternalRowField`
and `ValidateExternalType` and they add 2
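Wide schemas like the >250-field case above are where generated code runs into the JVM's 64 KB per-method bytecode limit, and the usual remedy is to chunk the generated statements into helper methods. The sketch below is not Spark's actual codegen (the names `SplitSketch` and `splitIntoMethods` are invented for illustration); it only shows the splitting idea:

```scala
// Illustrative sketch (not Spark's actual codegen): when a wide schema yields
// hundreds of generated statements, chunking them into helper methods keeps
// each generated method comfortably below the JVM's 64 KB bytecode limit.
object SplitSketch {
  def splitIntoMethods(stmts: Seq[String], maxPerMethod: Int): Seq[String] =
    stmts.grouped(maxPerMethod).zipWithIndex.map { case (group, i) =>
      s"private void apply_$i() { ${group.mkString(" ")} }"
    }.toSeq
}
```

For example, if each of 250 fields contributes two statements, the 500 statements split at 100 per method into five helpers instead of one oversized method.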
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/15307
[SPARK-17731][SQL][STREAMING] Metrics for structured streaming
## What changes were proposed in this pull request?
Metrics are needed for monitoring structured streaming apps. Here is the
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/15257#discussion_r81273658
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -46,12 +57,17 @@ object Literal {
case s:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15257
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66141/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15257
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15257
**[Test build #66141 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66141/consoleFull)**
for PR 15257 at commit
Github user hvanhovell commented on a diff in the pull request:
https://github.com/apache/spark/pull/15301#discussion_r81272278
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -102,6 +102,83 @@ abstract class Catalog {
def
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15300
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66140/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15300
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15300
**[Test build #66140 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66140/consoleFull)**
for PR 15300 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14553
**[Test build #66148 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66148/consoleFull)**
for PR 14553 at commit
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/15301#discussion_r81270990
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala ---
@@ -102,6 +102,83 @@ abstract class Catalog {
def
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13930
Hmm. Sorry, more specifically, it's not about Hive Analyzer. The exception
is raised during Hive Catalog lookup.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/15257#discussion_r81270530
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -46,12 +57,17 @@ object Literal {
case s:
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13930
Yep. Right.
The following is the situation here.
0. There was a Hive function: `double f()`.
1. Spark Analyzer makes a Hive function expression whose return type is
Github user maropu commented on a diff in the pull request:
https://github.com/apache/spark/pull/15257#discussion_r81270080
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/literals.scala
---
@@ -46,12 +57,17 @@ object Literal {
case s:
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/13930
oh, how did Hive's analyzer get involved here? I am thinking that when we
create the Hive function's expression, we will know the expected input type of
the function. Then, Spark's analyzer will add
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15295
**[Test build #66147 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66147/consoleFull)**
for PR 15295 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15305#discussion_r81269642
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/ColumnType.scala
---
@@ -589,7 +589,7 @@ private[columnar] case class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15263
**[Test build #66146 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66146/consoleFull)**
for PR 15263 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15306
Thanks - this is super useful!
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15305
Merged build finished. Test PASSed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/13930
Hi. What analyzer do you mean? The exception comes from the Hive Analyzer,
which considers DecimalType different from double.
---
If your project is set up for it, you can reply to this email
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15305
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66139/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15305
**[Test build #66139 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66139/consoleFull)**
for PR 15305 at commit
Github user koeninger commented on the issue:
https://github.com/apache/spark/pull/15102
> It would be nice to be able to do something other than earliest/latest.
That's what Assign and the starting offset arguments to the Subscribe
strategies are for. The
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/15306
cc @srowen want to review this?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15292
**[Test build #66145 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66145/consoleFull)**
for PR 15292 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/15295#discussion_r81268551
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/RuntimeConfig.scala
---
@@ -36,6 +37,7 @@ class RuntimeConfig private[sql](sqlConf: SQLConf = new
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/13930#discussion_r81268420
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveSessionCatalog.scala ---
@@ -163,6 +164,19 @@ private[sql] class HiveSessionCatalog(
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15257
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/66137/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15292
**[Test build #66144 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66144/consoleFull)**
for PR 15292 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15257
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15257
**[Test build #66137 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66137/consoleFull)**
for PR 15257 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15306
**[Test build #66143 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/66143/consoleFull)**
for PR 15306 at commit
GitHub user ericl opened a pull request:
https://github.com/apache/spark/pull/15306
[SPARK-17740] Spark tests should mock / interpose HDFS to ensure that
streams are closed
## What changes were proposed in this pull request?
As a followup to SPARK-17666, ensure filesystem
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14897
@gatorsmile After some more thoughts, I think this PR doesn't need to be
blocked by the global conf PR, users can still use `SET
spark.sql.globalTempDatabase` to check the global temp database.