Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14332#discussion_r72009839
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala ---
@@ -54,14 +54,13 @@ object DataFrameExample {
}
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r72010521
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1774,6 +1775,49 @@ class Analyzer(
}
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14331#discussion_r72010784
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/interface.scala
---
@@ -173,8 +190,18 @@ case class CatalogTable(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14334
**[Test build #62797 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62797/consoleFull)**
for PR 14334 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14297
**[Test build #62786 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62786/consoleFull)**
for PR 14297 at commit
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r72011394
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1774,6 +1775,49 @@ class Analyzer(
}
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/14333
I don't understand the last change. As far as I can see it can be destroyed
inside the loop iteration. It's also possible to reuse the broadcast (declare
outside the loop), and unpersist each
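The broadcast lifecycle discussed in this comment — create inside the loop and destroy per iteration, versus declare outside and unpersist — can be sketched as follows. This is a hedged, self-contained illustration: `Broadcast` below is a toy stand-in for Spark's broadcast handle (not the real `org.apache.spark.broadcast.Broadcast`), and `run_iterations` is a made-up driver loop, not the KMeans code under review.

```python
class Broadcast:
    """Toy stand-in for a Spark broadcast variable: destroy() frees the
    value on the executors, after which the handle must not be used."""
    def __init__(self, value):
        self._value = value
        self._destroyed = False

    @property
    def value(self):
        if self._destroyed:
            raise RuntimeError("broadcast used after destroy()")
        return self._value

    def destroy(self):
        self._destroyed = True
        self._value = None


def run_iterations(centers, n_iters):
    """Pattern from the comment: broadcast the current centers inside each
    iteration and destroy the broadcast before the next one, so stale
    copies never accumulate on the executors."""
    for _ in range(n_iters):
        bc_new_centers = Broadcast(centers)                 # ship current centers
        centers = [c + 1.0 for c in bc_new_centers.value]   # "use" them remotely
        bc_new_centers.destroy()                            # free inside the loop
    return centers
```

The alternative the comment mentions — one broadcast declared outside the loop, unpersisted (not destroyed) each iteration so it can be refreshed — trades per-iteration allocation for careful reuse of a single handle.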
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/14335#discussion_r72011821
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -472,12 +473,13 @@ final class OnlineLDAOptimizer extends
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14283
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user lovexi opened a pull request:
https://github.com/apache/spark/pull/14338
Make parameters configurable in BlockManager
## What changes were proposed in this pull request?
Make parameters configurable in BlockManager class, such as max_attempts
and sleep_time
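The change described above — replacing hard-coded constants with configurable values that fall back to the old defaults — can be sketched like this. The key names (`spark.block.manager.maxAttempts` and `spark.block.manager.sleepTimeMs`) are made up for illustration and are not necessarily the ones this PR introduces; `Conf` is a minimal stand-in for Spark's configuration object.

```python
class Conf:
    """Toy stand-in for a SparkConf-like key/value configuration."""
    def __init__(self, settings):
        self._settings = settings

    def get_int(self, key, default):
        # Fall back to the previously hard-coded default when unset.
        return int(self._settings.get(key, default))


conf = Conf({"spark.block.manager.maxAttempts": "5"})
max_attempts = conf.get_int("spark.block.manager.maxAttempts", 3)   # overridden: 5
sleep_time_ms = conf.get_int("spark.block.manager.sleepTimeMs", 1000)  # default kept
```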
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/14331
LGTM
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14333
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62778/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14333
**[Test build #62778 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62778/consoleFull)**
for PR 14333 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14333
Merged build finished. Test FAILed.
---
Github user sarutak commented on the issue:
https://github.com/apache/spark/pull/13738
Sorry for the late response. I'll try it the way you mentioned.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14333
**[Test build #62778 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62778/consoleFull)**
for PR 14333 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14333
**[Test build #62779 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62779/consoleFull)**
for PR 14333 at commit
GitHub user HyukjinKwon opened a pull request:
https://github.com/apache/spark/pull/14339
[SPARK-16698][SQL] Field names having dots should be allowed for
datasources based on FileFormat
## What changes were proposed in this pull request?
It seems this is a regression
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/13382
LGTM as well, so I'm going to merge this into master for inclusion in Spark
2.1.0. Thanks @dafrista for writing this and to @ericl for helping with review.
---
Github user tnachen commented on the issue:
https://github.com/apache/spark/pull/13077
ok to test
---
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/14340
[SPARK-16534][Streaming][Kafka] Add Python API support for Kafka 0.10
connector
## What changes were proposed in this pull request?
This PR adds the support of Python API for Spark
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14339
can we test it in `SQLQuerySuite` or something? I think
`FileSourceStrategySuite` should be used to test the strategy.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005309
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,135 @@ private[sql] class JDBCRDD(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14331
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14327
**[Test build #62788 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62788/consoleFull)**
for PR 14327 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14327
Besides, according to the JIRA, this may need to be backported to 1.6.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14313
**[Test build #62789 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62789/consoleFull)**
for PR 14313 at commit
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14313
@cloud-fan I just addressed your comments. I added another argument in
`JDBCConversion` for `MutableRow` so that we can avoid type-boxing.
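The boxing concern in this comment can be illustrated with a toy analogue: instead of a conversion that returns a (boxed) value for the caller to store, the mutable target row is passed into the conversion so primitives are written in place into typed storage. `MutableRow` below is a simplified stand-in for Catalyst's `MutableRow` (here backed by Python's `array` of unboxed 64-bit ints), and `long_conversion` is a hypothetical example of one conversion, not the PR's actual `JDBCConversion` code.

```python
import array


class MutableRow:
    """Toy mutable row with typed setters, backed by unboxed storage."""
    def __init__(self, num_fields):
        self._longs = array.array("q", [0] * num_fields)  # 64-bit ints, no per-cell objects

    def set_long(self, i, v):
        self._longs[i] = v

    def get_long(self, i):
        return self._longs[i]


def long_conversion(row, pos, raw):
    """Conversion that receives the target row and position, mirroring the
    extra MutableRow argument described in the comment."""
    row.set_long(pos, raw)


row = MutableRow(2)
long_conversion(row, 0, 42)
long_conversion(row, 1, 7)
```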
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72008447
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,133 @@ private[sql] class JDBCRDD(
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/14341
[Minor][Doc][SQL] Fix two documents regarding size in bytes
## What changes were proposed in this pull request?
Fix two places in SQLConf documents regarding size in bytes and statistics.
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72008403
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,133 @@ private[sql] class JDBCRDD(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14341
**[Test build #62790 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62790/consoleFull)**
for PR 14341 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14339#discussion_r72008560
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -2982,4 +2982,19 @@ class SQLQuerySuite extends QueryTest with
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14327#discussion_r72008576
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -150,11 +150,13 @@ class SimpleTestOptimizer extends
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14339
LGTM, cc @liancheng to take another look
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/14337
---
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/14337
@ooq next time please open a pull request against master branch rather than
2.0. Thanks.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/13077
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/13077
**[Test build #62783 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62783/consoleFull)**
for PR 13077 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14313
**[Test build #62792 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62792/consoleFull)**
for PR 14313 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14339
**[Test build #62785 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62785/consoleFull)**
for PR 14339 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14327#discussion_r72009385
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -150,11 +150,13 @@ class SimpleTestOptimizer extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14339
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14339
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62785/
Test PASSed.
---
Github user markhamstra commented on a diff in the pull request:
https://github.com/apache/spark/pull/14332#discussion_r7120
--- Diff:
examples/src/main/scala/org/apache/spark/examples/ml/DataFrameExample.scala ---
@@ -54,14 +54,13 @@ object DataFrameExample {
}
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14297
retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14297
**[Test build #62775 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62775/consoleFull)**
for PR 14297 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14302#discussion_r72001037
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/command/tables.scala ---
@@ -520,7 +522,7 @@ case class DescribeTableCommand(table:
Github user nblintao commented on the issue:
https://github.com/apache/spark/pull/14204
@yhuai Does the column exist? Are all of the cells in the column blank?
That's weird. It was OK when I committed. I'll check it out later.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14331
**[Test build #62776 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62776/consoleFull)**
for PR 14331 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/14327#discussion_r72002405
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -422,6 +422,32 @@ class DatasetSuite extends QueryTest with
Github user tnachen commented on the issue:
https://github.com/apache/spark/pull/13051
@srowen Or if you could help :)
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14327
**[Test build #62782 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62782/consoleFull)**
for PR 14327 at commit
Github user dafrista commented on the issue:
https://github.com/apache/spark/pull/13382
Great. Thanks @JoshRosen, my JIRA username is chobrian.
---
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14335#discussion_r72003627
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -472,12 +473,13 @@ final class OnlineLDAOptimizer extends
Github user JoshRosen commented on the issue:
https://github.com/apache/spark/pull/13382
Assigned. Thanks!
---
Github user databricks-jenkins commented on the issue:
https://github.com/apache/spark/pull/36
**[Test build #60 has
started](https://jenkins.test.databricks.com/job/spark-pull-request-builder/60/consoleFull)**
for PR 36 at commit
Github user databricks-jenkins commented on the issue:
https://github.com/apache/spark/pull/36
**[Test build #60 has
finished](https://jenkins.test.databricks.com/job/spark-pull-request-builder/60/consoleFull)**
for PR 36 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14339#discussion_r72004263
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/FileSourceStrategySuite.scala
---
@@ -242,6 +242,22 @@ class
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14339
retest this please
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005085
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,135 @@ private[sql] class JDBCRDD(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14297
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62775/
Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005471
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -407,84 +496,8 @@ private[sql] class JDBCRDD(
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005487
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,135 @@ private[sql] class JDBCRDD(
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/14339
Sure!
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14297
**[Test build #62775 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62775/consoleFull)**
for PR 14297 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14297
Merged build finished. Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005398
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -322,46 +322,135 @@ private[sql] class JDBCRDD(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14331
**[Test build #62776 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62776/consoleFull)**
for PR 14331 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14333
**[Test build #62793 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62793/consoleFull)**
for PR 14333 at commit
Github user nblintao commented on a diff in the pull request:
https://github.com/apache/spark/pull/13620#discussion_r71999178
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala ---
@@ -369,3 +375,246 @@ private[ui] class AllJobsPage(parent: JobsTab)
extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14337
**[Test build #62774 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62774/consoleFull)**
for PR 14337 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14337
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62774/
Test PASSed.
---
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/14283
thanks for the review, merging to master!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/14334
Hi, @liancheng.
The failure is due to `LogicalPlanToSQLSuite`, which verifies the generated
SQL. Please use the following command to update the generated SQL answer sets.
```bash
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14338
Can one of the admins verify this patch?
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/14333
@srowen
The `bcNewCenters` in `KMeans` has a problem.
Checking the code logic in detail, we can see that in each loop it should
destroy the broadcast variable `bcNewCenters` generated in
Github user WeichenXu123 commented on a diff in the pull request:
https://github.com/apache/spark/pull/14335#discussion_r72003530
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/clustering/LDAOptimizer.scala ---
@@ -472,12 +473,13 @@ final class OnlineLDAOptimizer extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14340
**[Test build #62784 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62784/consoleFull)**
for PR 14340 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14339
**[Test build #62785 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62785/consoleFull)**
for PR 14339 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005232
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -407,84 +496,8 @@ private[sql] class JDBCRDD(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14333
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14333
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62779/
Test PASSed.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72005172
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -407,84 +496,8 @@ private[sql] class JDBCRDD(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14333
**[Test build #62779 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62779/consoleFull)**
for PR 14333 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14331
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62776/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14331
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62777/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14331
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14331
**[Test build #62777 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62777/consoleFull)**
for PR 14331 at commit
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14327
cc @cloud-fan @liancheng @yhuai Please review this change. Thanks!
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/14313#discussion_r72007301
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JDBCRDD.scala
---
@@ -407,84 +496,8 @@ private[sql] class JDBCRDD(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14327
**[Test build #62782 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62782/consoleFull)**
for PR 14327 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14132#discussion_r72008134
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -1774,6 +1775,49 @@ class Analyzer(
}
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14340
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/14340
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/62784/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/14340
**[Test build #62784 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62784/consoleFull)**
for PR 14340 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14327#discussion_r72008673
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
---
@@ -150,11 +150,13 @@ class SimpleTestOptimizer extends
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/14341
cc @cloud-fan @liancheng Just minor documentation changes. Please review
whether this change is appropriate. Thanks.
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/14296#discussion_r72009599
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala
---
@@ -46,6 +46,14 @@ class SQLQuerySuite extends QueryTest with
GitHub user rxin opened a pull request:
https://github.com/apache/spark/pull/14342
[SPARK-16685] Remove audit-release scripts.
## What changes were proposed in this pull request?
This patch removes dev/audit-release. It was initially created to do basic
release auditing, but we