Github user liu-zhaokun commented on the issue:
https://github.com/apache/spark/pull/18842
@jerryshao
OK. Thanks.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r131351705
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerUtils.scala
---
@@ -510,12 +510,20 @@ trait
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80237 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80237/testReport)**
for PR 18779 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80237/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18779
Merged build finished. Test PASSed.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131353213
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala
---
@@ -2023,4 +2023,11 @@ class DataFrameSuite extends QueryTest with
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18123
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80242/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18123
Merged build finished. Test PASSed.
---
Github user MLnick commented on a diff in the pull request:
https://github.com/apache/spark/pull/16158#discussion_r131340257
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -133,7 +134,10 @@ class CrossValidator @Since("1.2.0")
Github user eatoncys commented on a diff in the pull request:
https://github.com/apache/spark/pull/18810#discussion_r131340166
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -356,6 +356,16 @@ class CodegenContext
Github user facaiy commented on the issue:
https://github.com/apache/spark/pull/18764
Test failures in pyspark.ml.tests with python2.6, but I don't have the
environment.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18841
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18841
**[Test build #80238 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80238/testReport)**
for PR 18841 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18841
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80238/
Test FAILed.
---
Github user yanboliang commented on the issue:
https://github.com/apache/spark/pull/18764
@facaiy No worries, I will take a look.
---
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r131348687
--- Diff: resource-managers/mesos/pom.xml ---
@@ -29,7 +29,7 @@
Spark Project Mesos
mesos
-1.0.0
+1.3.0-rc1
---
Github user skonto commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r131350894
--- Diff: docs/running-on-mesos.md ---
@@ -479,6 +479,35 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user liu-zhaokun commented on the issue:
https://github.com/apache/spark/pull/18838
@jerryshao
Thanks for your reply.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/18838
Could we close this one?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80246 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80246/testReport)**
for PR 18779 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17673
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17673
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80243/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17673
**[Test build #80243 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80243/testReport)**
for PR 17673 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18123
**[Test build #80242 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80242/testReport)**
for PR 18123 at commit
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/17673
ok to test
---
Github user MLnick commented on the issue:
https://github.com/apache/spark/pull/18123
ok to test
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18123
**[Test build #80242 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80242/testReport)**
for PR 18123 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17673
**[Test build #80243 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80243/testReport)**
for PR 17673 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18779
**[Test build #80244 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80244/testReport)**
for PR 18779 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18838
This introduces an NPE. It also changes what happens when the dir exists. I
don't see a problem that this solves, so I'd close this.
---
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18842
I would suggest not removing such configurations even if they are only used in UT; some users may depend on them explicitly, and abruptly removing them will break compatibility.
---
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131349769
--- Diff: sql/core/src/test/resources/sql-tests/inputs/group-by-ordinal.sql
---
@@ -52,8 +52,14 @@ select count(a), a from (select 1 as a) tmp group by 2
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131349806
--- Diff: sql/core/src/test/resources/sql-tests/inputs/order-by-ordinal.sql
---
@@ -34,3 +34,8 @@ set spark.sql.orderByOrdinal=false;
-- 0 is now a
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18838
I think the original purpose is to avoid reusing the directory and always create a unique directory. The change here seems to alter that semantics, so I don't think it is a reasonable fix
Github user 10110346 commented on a diff in the pull request:
https://github.com/apache/spark/pull/18779#discussion_r131351217
--- Diff: sql/core/src/test/resources/sql-tests/inputs/order-by-ordinal.sql
---
@@ -34,3 +34,8 @@ set spark.sql.orderByOrdinal=false;
-- 0 is now a
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18841
**[Test build #80245 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80245/testReport)**
for PR 18841 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80259 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80259/testReport)**
for PR 18797 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18749
Yep, I agree with 'almost always', and tiny changes might not require them.
I think that's what I said you said? "some cases". See also
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80261 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80261/testReport)**
for PR 18749 at commit
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/18640
LGTM, great to see progress on ORC support.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18797
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80257 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80257/testReport)**
for PR 18797 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18797
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80257/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80258 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80258/testReport)**
for PR 18797 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18843
**[Test build #80255 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80255/testReport)**
for PR 18843 at commit
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18640
I agree with the following, but this does not block those users. This is only better than putting the dependency on Hive because it also supports the other users who are using ML and
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18843
**[Test build #80255 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80255/testReport)**
for PR 18843 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18839
LGTM. Merging to master
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80258 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80258/testReport)**
for PR 18797 at commit
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/18797
@WeichenXu123 there is one more change you'll need, in
`AFTSurvivalRegressionSuite.scala` to also remove a datum with label 0
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18797
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18797
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80259/
Test FAILed.
---
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/18749
I am also afraid somebody might misunderstand when I or a few other committers say `SGTM` or `looks good` in Spark SQL. When a committer tries to merge Spark SQL related PRs based on these
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18847
**[Test build #80260 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80260/testReport)**
for PR 18847 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18847
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18847
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80260/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18742
**[Test build #80262 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80262/testReport)**
for PR 18742 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18742
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80262/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80261 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80261/testReport)**
for PR 18749 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18841#discussion_r131495498
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/mathExpressions.scala
---
@@ -170,29 +193,29 @@ case class Pi() extends
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18317#discussion_r131493686
--- Diff:
core/src/main/java/org/apache/spark/util/collection/unsafe/sort/UnsafeSorterSpillReader.java
---
@@ -73,7 +73,9 @@ public
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18787#discussion_r131493567
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala
---
@@ -110,6 +113,67 @@ private[sql] object
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18848
cc @gatorsmile
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18659
**[Test build #80264 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80264/testReport)**
for PR 18659 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18848#discussion_r131486596
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala
---
@@ -638,4 +625,28 @@ object DataSource extends Logging {
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18848
**[Test build #80263 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80263/testReport)**
for PR 18848 at commit
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18847
@gatorsmile Thanks!! To the best of my knowledge, we don't have the problem of the analyze table command failing with java.util.NoSuchElementException in 2.2. In 2.2, we used to add the column
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18317#discussion_r131493511
--- Diff: core/src/main/java/org/apache/spark/io/ReadAheadInputStream.java
---
@@ -0,0 +1,279 @@
+/*
+ * Licensed under the Apache License,
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18317#discussion_r131493491
--- Diff: core/src/main/java/org/apache/spark/io/ReadAheadInputStream.java
---
@@ -0,0 +1,279 @@
+/*
+ * Licensed under the Apache License,
Github user sitalkedia commented on a diff in the pull request:
https://github.com/apache/spark/pull/18317#discussion_r131493469
--- Diff: core/src/main/java/org/apache/spark/io/ReadAheadInputStream.java
---
@@ -0,0 +1,279 @@
+/*
+ * Licensed under the Apache License,
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18317
**[Test build #80267 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80267/testReport)**
for PR 18317 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18844
**[Test build #80266 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80266/testReport)**
for PR 18844 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18317
**[Test build #80268 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80268/testReport)**
for PR 18317 at commit
Github user sitalkedia commented on the issue:
https://github.com/apache/spark/pull/18317
@jiangxb1987, @kiszk Addressed review comments, lmk what you guys think. BTW, this idea can be applied to other places where we block on reading the input stream, like HDFS reading. What
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80259 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80259/testReport)**
for PR 18797 at commit
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/18668#discussion_r131483252
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkHadoopUtil.scala
---
@@ -99,17 +100,30 @@ class SparkHadoopUtil extends Logging {
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/18848
[SPARK-21374][CORE] Fix reading globbed paths from S3 into DF with disabled
FS cache
## What changes were proposed in this pull request?
This PR replaces #18623 to do some clean up.
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80261/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Merged build finished. Test FAILed.
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18640
Hi, @liancheng , @zhzhan , @rxin , @marmbrus .
I'm pinging you since you worked on #6194 before.
---
Github user kiszk commented on the issue:
https://github.com/apache/spark/pull/18640
Can we add any smaller code to use this, too?
---
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16158#discussion_r131454343
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/CrossValidator.scala ---
@@ -133,7 +134,10 @@ class CrossValidator @Since("1.2.0")
Github user susanxhuynh commented on a diff in the pull request:
https://github.com/apache/spark/pull/18837#discussion_r131431005
--- Diff: docs/running-on-mesos.md ---
@@ -479,6 +479,35 @@ See the [configuration page](configuration.html) for
information on Spark config
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18640
Thank you for review, @kiszk .
The example may be #17980 , #17924, and #17943 .
If possible, in this PR, I want to focus only on the `Dependency on ORC` issue.
---
Github user dilipbiswal commented on the issue:
https://github.com/apache/spark/pull/18847
@gatorsmile I just created this PR for you to take a look and decide whether we need to backport the above 3 PRs. The problem for SPARK-21599 does not exist on 2.2 as it was introduced as part of
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18820
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80252/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18820
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80254/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17980
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17980
**[Test build #80254 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80254/testReport)**
for PR 17980 at commit
Github user BryanCutler commented on a diff in the pull request:
https://github.com/apache/spark/pull/18664#discussion_r131452100
--- Diff: python/pyspark/sql/tests.py ---
@@ -3036,6 +3052,9 @@ def test_toPandas_arrow_toggle(self):
pdf = df.toPandas()
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/18640
Thank you for review, @rxin .
We can use ORC like Parquet now. Parquet is inside `sql/core`, not
`sql/hive`.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/80253/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/18749
Merged build finished. Test FAILed.
---
GitHub user dilipbiswal opened a pull request:
https://github.com/apache/spark/pull/18847
[SPARK-12717][SPARK-21031][SPARK-21599][SQL][BRANCH-2.2] Collecting column
statistics for datasource tables may fail with java.util.NoSuchElementException
## What changes were proposed in
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18843
**[Test build #80256 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80256/testReport)**
for PR 18843 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18797
**[Test build #80257 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80257/testReport)**
for PR 18797 at commit
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/18640
Why are we adding this to core? Why not just the hive module?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/18749
**[Test build #80253 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/80253/testReport)**
for PR 18749 at commit