Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17876
Merged build finished. Test FAILed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17844
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17844
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76600/
Test FAILed.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/15259
@hvanhovell I updated the PR description.
---
Github user wangyum commented on the issue:
https://github.com/apache/spark/pull/15259
Jenkins, retest this please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15259
**[Test build #76610 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76610/testReport)**
for PR 15259 at commit
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/15435
jenkins test please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17887
**[Test build #76611 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76611/testReport)**
for PR 17887 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17901
**[Test build #76603 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76603/testReport)**
for PR 17901 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17901
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76603/
Test FAILed.
---
GitHub user felixcheung opened a pull request:
https://github.com/apache/spark/pull/17905
[SPARK-20661][SPARKR][TEST][FOLLOWUP] SparkR tableNames() test fails
## What changes were proposed in this pull request?
Change it to check for relative change like in this
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17901
Merged build finished. Test FAILed.
---
GitHub user 10110346 opened a pull request:
https://github.com/apache/spark/pull/17906
[SPARK-20665][SQL] "Bround" function returns NULL
## What changes were proposed in this pull request?
>select bround(12.3, 2);
>NULL
For this case, the expected result is 12.3, but it
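Spark's `bround` is documented to use HALF_EVEN ("banker's") rounding, under which 12.3 rounded at scale 2 should come back unchanged rather than NULL. As a sketch of the expected semantics only (not the Spark implementation), using Python's `decimal` module:

```python
from decimal import Decimal, ROUND_HALF_EVEN

def bround(value, scale=0):
    # HALF_EVEN ("banker's") rounding at the given scale; a stand-in
    # for the semantics bround is meant to follow, not Spark's code.
    quantum = Decimal(1).scaleb(-scale)  # scale=2 -> Decimal('0.01')
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_EVEN)
```

Here `bround(12.3, 2)` stays 12.3 since no rounding is needed, while exact ties go to the even neighbor: 2.5 rounds down to 2 and 3.5 rounds up to 4.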
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17666
**[Test build #76604 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76604/testReport)**
for PR 17666 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17905
**[Test build #76612 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76612/testReport)**
for PR 17905 at commit
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17901
you can rebase to pick up the fix for the R tests
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17906
Can one of the admins verify this patch?
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76604/
Test FAILed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/15435
please rebase to pick up the fix for R tests
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Merged build finished. Test FAILed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17896
Please rebase to pick up fix for R tests
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17893
Please rebase to pick up fix for R tests
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17900
Please rebase to pick up fix for R tests
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17904
Please rebase to pick up fix for R tests
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17904
This is kinda weird - I don't know why it's running R tests
---
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/17869
@HyukjinKwon
You are right. Adding Utils.clearLocalRootDirs() at the start of beforeEach
or at the end of afterEach makes the results correct.
I chose to add Utils.clearLocalRootDirs()
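The pattern under discussion here, clearing cached local directories around each test so state cannot leak between suites, can be sketched with plain `unittest` fixtures. The suite name, directory layout, and test are illustrative, not Spark's actual code; `setUp`/`tearDown` play the roles of `beforeEach`/`afterEach`:

```python
import os
import shutil
import tempfile
import unittest

class LocalDirsSuite(unittest.TestCase):
    def setUp(self):
        # beforeEach: start every test from a fresh scratch directory.
        self.local_dir = tempfile.mkdtemp(prefix="spark-local-")

    def tearDown(self):
        # afterEach: remove the directory so nothing leaks into the next test.
        shutil.rmtree(self.local_dir, ignore_errors=True)

    def test_writes_are_isolated(self):
        path = os.path.join(self.local_dir, "block")
        with open(path, "w") as f:
            f.write("data")
        self.assertTrue(os.path.exists(path))
```

Doing the cleanup in `tearDown` (rather than only once in a `setUpClass`-style hook) is what guarantees each test observes a clean slate, at the cost of running the cleanup once per test.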
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115401280
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1212,22 +1209,27 @@ case class
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/17879
ping @yanboliang @felixcheung
This is needed for one-hot encoding to be consistent with R, therefore
enabling direct comparison of Spark results to R's. Could you guys please take
a look?
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17666
Please rebase to pick up fix for the R tests.
Though again, I'm not sure why it is running R tests for this PR - is the
change detection logic broken somehow?
---
GitHub user ghoto opened a pull request:
https://github.com/apache/spark/pull/17907
SPARK-7856 Principal components and variance using computeSVD()
## What changes were proposed in this pull request?
The current implementation of
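For reference, the standard link between principal components and the SVD, which this PR draws on: for a column-centered data matrix $X$ with $n$ rows (using the sample-variance convention),

```latex
X = U \Sigma V^{\top},
\qquad \text{principal directions} = \text{columns of } V,
\qquad \operatorname{Var}_i = \frac{\sigma_i^2}{n - 1}
```

so the right singular vectors give the components and the squared singular values, scaled by $n-1$, give the variance explained by each.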
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17907
Can one of the admins verify this patch?
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17901
Thank you everybody. Let me try to address the comments.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17879#discussion_r115401566
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -131,6 +163,12 @@ object StringIndexer extends
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115401951
--- Diff: python/pyspark/sql/functions.py ---
@@ -144,12 +144,6 @@ def _():
'measured in radians.',
}
-_functions_2_2
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16985
**[Test build #76614 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76614/testReport)**
for PR 16985 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17644
**[Test build #76613 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76613/testReport)**
for PR 17644 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115402276
--- Diff: python/pyspark/sql/functions.py ---
@@ -144,12 +144,6 @@ def _():
'measured in radians.',
}
-_functions_2_2
Github user falaki commented on the issue:
https://github.com/apache/spark/pull/17905
@felixcheung this approach is fine, but I think it is better if unit tests
do not leave any side-effects to begin with. In this case every test should
clean up state before and after (similar to
Github user actuaryzhang commented on a diff in the pull request:
https://github.com/apache/spark/pull/17879#discussion_r115402478
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -131,6 +163,12 @@ object StringIndexer extends
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/17879
@felixcheung Thanks. I will update the annotation.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17905
That I agree with completely, @falaki - maybe we should audit the Scala SQL
tests - pretty sure the R tests do not leave anything behind and apparently
only fail when running Scala tests before running
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17869#discussion_r115402836
--- Diff: core/src/test/scala/org/apache/spark/SortShuffleSuite.scala ---
@@ -50,6 +50,7 @@ class SortShuffleSuite extends ShuffleSuite with
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17666
ok, thanks! No, I don't think I touch that.
---
Github user viirya commented on the issue:
https://github.com/apache/spark/pull/17879
@actuaryzhang There seems to be something wrong with GitHub's webpage, so I
can't reply directly to the above comment. `ALSModelParams.getColdStartStrategy`
is one example.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17666
**[Test build #76615 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76615/testReport)**
for PR 17666 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115403383
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/datetimeExpressions.scala
---
@@ -1212,22 +1209,27 @@ case class
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17905
I think this might be the reason? @yhuai @gatorsmile Am I reading these
right that these tables are created but never dropped?
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17711
**[Test build #76606 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76606/testReport)**
for PR 17711 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17711
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17711
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76606/
Test FAILed.
---
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17879#discussion_r115403961
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/feature/StringIndexer.scala ---
@@ -131,6 +163,12 @@ object StringIndexer extends
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/17869
@HyukjinKwon
I suggest adding it to beforeAll.
If it is added to beforeEach, most of the unit tests will run it twice.
What do you think?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17858
**[Test build #76617 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76617/testReport)**
for PR 17858 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17844
**[Test build #76618 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76618/testReport)**
for PR 17844 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17876
**[Test build #76616 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76616/testReport)**
for PR 17876 at commit
Github user felixcheung commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115404160
--- Diff: python/pyspark/sql/functions.py ---
@@ -144,12 +144,6 @@ def _():
'measured in radians.',
}
-_functions_2_2
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115404224
--- Diff: R/pkg/R/functions.R ---
@@ -1752,15 +1752,15 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column into a
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/17666
LGTM
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17666
**[Test build #76602 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76602/testReport)**
for PR 17666 at commit
Github user heary-cao commented on the issue:
https://github.com/apache/spark/pull/17869
@HyukjinKwon
I suggest adding it to beforeAll.
If it is added to beforeEach, most of the unit tests will run
Utils.clearLocalRootDirs() twice.
What do you think?
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@felixcheung you are right. That is the problem.
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
@falaki's PR did not actually trigger that test.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76602/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17666
**[Test build #76608 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76608/testReport)**
for PR 17666 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17666
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76608/
Test FAILed.
---
Github user map222 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17865#discussion_r115404673
--- Diff: python/pyspark/sql/functions.py ---
@@ -153,7 +173,7 @@ def _():
# math functions that take two arguments as input
_binary_mathfunctions
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
lgtm
---
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17858#discussion_r115404865
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/InsertIntoHiveTable.scala
---
@@ -97,12 +97,23 @@ case class InsertIntoHiveTable(
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17869
I think that's fine. It should be safe.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/15435
@felixcheung already updated.
---
Github user WeichenXu123 commented on the issue:
https://github.com/apache/spark/pull/15435
jenkins test please
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17904
**[Test build #76607 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76607/testReport)**
for PR 17904 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17904
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17904
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76607/
Test FAILed.
---
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17905
hmm, spoke too soon I think - looks to me like all the `withTable` clauses
are in place and complete.
Not sure what could be leaking through then...
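The `withTable` pattern being audited here, creating a table for the test body and unconditionally dropping it afterwards, can be sketched as a context manager over an in-memory SQLite catalog. This is an illustrative stand-in, not Spark's `SQLTestUtils`:

```python
import sqlite3
from contextlib import contextmanager

conn = sqlite3.connect(":memory:")

@contextmanager
def with_table(name, schema):
    # Create the table for the test body, then drop it unconditionally,
    # so nothing stays behind in the catalog for later tests to trip on.
    conn.execute(f"CREATE TABLE {name} ({schema})")
    try:
        yield
    finally:
        conn.execute(f"DROP TABLE IF EXISTS {name}")

with with_table("t", "id INTEGER"):
    conn.execute("INSERT INTO t VALUES (1)")
# the table has already been dropped at this point
```

The `finally` clause is the important part: the drop runs even when the test body raises, which is exactly the guarantee a leaking test suite would be missing.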
---
Github user yhuai commented on the issue:
https://github.com/apache/spark/pull/17905
I see. I think
https://github.com/apache/spark/pull/17905/commits/d4c1a9db25ee7386f7b12e4dabb54210a9892510
is good. How about we get it checked in first (after Jenkins passes)?
---
If your project
Github user map222 commented on the issue:
https://github.com/apache/spark/pull/17865
@gatorsmile I checked four functions, `approx_count_distinct`, `coalesce`,
`covar_samp`, and `countDistinct`, comparing the Python and Scala
documentation. None of them are the same. My guess is
Github user felixcheung commented on the issue:
https://github.com/apache/spark/pull/17905
right. I think it's a good way to decouple R tests from any earlier state
and also not to mask the error/leak. I'll get that in when Jenkins passes (and
see if I can figure out what is leaking)
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17887#discussion_r115406428
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala
---
@@ -1168,6 +1169,18 @@ class DatasetSuite extends QueryTest with
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17879
**[Test build #76619 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76619/testReport)**
for PR 17879 at commit
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/17901#discussion_r115406526
--- Diff: R/pkg/R/functions.R ---
@@ -1752,15 +1752,15 @@ setMethod("toRadians",
#' to_date
#'
-#' Converts the column into a
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17879
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76619/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17879
**[Test build #76619 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76619/testReport)**
for PR 17879 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17879
Merged build finished. Test FAILed.
---
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17904
Jenkins, retest this please
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/17887#discussion_r115406775
--- Diff:
sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ExpressionParserSuite.scala
---
@@ -413,38 +428,102 @@ class
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17904
**[Test build #76620 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76620/testReport)**
for PR 17904 at commit
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/17904
I'm not sure why it's failing those tests, plus my branch is up to date
with master (minus one unrelated commit)
---
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/17879
Thanks much @felixcheung and @viirya. I have addressed your comments.
- updated the version from 2.2 to 2.3
- changed `freq_desc` to `frequency_desc`
- moved toLowerCase to the getter method
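The ordering being renamed in this PR (frequency-descending label indexing, StringIndexer's default) can be sketched in plain Python. The option names below mirror the discussion but are illustrative, not Spark's exact parameter values:

```python
from collections import Counter

def string_indexer(labels, order="frequencyDesc"):
    # Frequency-descending indexing: the most frequent label gets index 0,
    # with ties broken alphabetically so the mapping is deterministic.
    counts = Counter(labels)
    if order == "frequencyDesc":
        ordered = sorted(counts, key=lambda lab: (-counts[lab], lab))
    elif order == "alphabetAsc":
        ordered = sorted(counts)
    else:
        raise ValueError(f"unknown order: {order}")
    index = {lab: i for i, lab in enumerate(ordered)}
    return [index[lab] for lab in labels]
```

For `["a", "b", "b", "c", "b"]`, "b" is most frequent and maps to 0, while "a" and "c" tie and are ordered alphabetically.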
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/17908
[SPARK-20667] [SQL] [TESTS] Cleanup the cataloged metadata after completing
the package of sql/core and sql/hive
## What changes were proposed in this pull request?
So far, we do not
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17879
**[Test build #76621 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76621/testReport)**
for PR 17879 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16989
**[Test build #76622 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76622/testReport)**
for PR 16989 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17905
How about https://github.com/apache/spark/pull/17908? It tries to reset the
cataloged metadata objects and temporary objects.
---
If your project is set up for it, you can reply to this email
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17865
**[Test build #76625 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76625/testReport)**
for PR 17865 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17908
**[Test build #76623 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76623/testReport)**
for PR 17908 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17879
**[Test build #76624 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76624/testReport)**
for PR 17879 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17908#discussion_r115407765
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/test/TestHive.scala ---
@@ -488,14 +488,9 @@ private[hive] class TestHiveSparkSession(
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/17908#discussion_r115407771
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/test/TestHive.scala ---
@@ -488,14 +488,9 @@ private[hive] class TestHiveSparkSession(