Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16131
@sethah Thanks for the review. I have updated it according to your suggestions.
@yanboliang @srowen Please take another look. Thanks.
---
If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well.
Github user rxin commented on the issue:
https://github.com/apache/spark/pull/16140
The cost to maintain this seems very small though, and I'd definitely use
it all the time in the repl. In Databricks this is not an issue since the
environment always appends the time, but I really
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16131#discussion_r90986964
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/regression/GeneralizedLinearRegression.scala
---
@@ -505,7 +505,8 @@ object GeneralizedLinearRegression
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/16131#discussion_r90986789
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/regression/GeneralizedLinearRegressionSuite.scala
---
@@ -497,6 +500,7 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16156
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16156
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69688/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16156
**[Test build #69688 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69688/consoleFull)**
for PR 16156 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15722
Merged build finished. Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/15722
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69689/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15722
**[Test build #69689 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69689/consoleFull)**
for PR 15722 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16030
I'm checking whether the original behavior is consistent (do we always
respect data schema column order when partition columns are included in data
schema?). If not, I call it a bug and we just
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/16030
Yeah, that's what worries me. But does that merit keeping bad, inconsistent
behavior forever? Maybe a dev-list question?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16155
**[Test build #69700 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69700/consoleFull)**
for PR 16155 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16030
@brkyvz I agree that always moving all partitioned columns to the end of
the schema is more consistent and intuitive. However, users may have
ordinal-dependent code like this:
```scala
// (snippet truncated in the archive)
```
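Ordinal-dependent code of the kind being described can be sketched in plain Scala (a hypothetical illustration, not the truncated original snippet — the column names and values are invented). Positional lookups silently read the wrong column once partition columns are moved to the end of the schema:

```scala
// Hypothetical rows for a table with columns (name, date, age), where
// `date` is a partition column included in the data schema. If partition
// columns are reordered to the end, the row becomes (name, age, date),
// and ordinal-based access breaks silently instead of raising an error.
val rowDataOrder: Seq[Any] = Seq("alice", "2016-12-06", 30)      // date at ordinal 1
val rowPartitionsLast: Seq[Any] = Seq("alice", 30, "2016-12-06") // partition column moved last

val dateBefore = rowDataOrder(1)      // "2016-12-06", as the code expects
val dateAfter  = rowPartitionsLast(1) // 30 -- wrong value, and no error is raised
```

Because nothing fails loudly, such code would keep running and produce wrong results after the reordering, which is the compatibility concern being raised.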
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16155
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69686/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16155
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16155
**[Test build #69686 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69686/consoleFull)**
for PR 16155 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16162
**[Test build #69699 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69699/consoleFull)**
for PR 16162 at commit
Github user brkyvz commented on the issue:
https://github.com/apache/spark/pull/16030
Whether it is a bug or not, I think we should have a consistent story for
all of these. It's weird that the behavior is different for CatalogTables, for
vectorized and non-vectorized Parquet readers,
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/16162
[SPARK-18729][SS]Move DataFrame.collect out of synchronized block in
MemorySink
## What changes were proposed in this pull request?
Move DataFrame.collect out of synchronized block so
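The change described above follows a general concurrency pattern: do the expensive work before taking the lock, and hold the lock only for the cheap state update. A minimal plain-Scala sketch (names are hypothetical; this is not the actual MemorySink code):

```scala
import scala.collection.mutable.ArrayBuffer

// Toy sink: the potentially slow batch computation runs outside the
// synchronized block; the lock guards only the buffer mutation.
class ToySink {
  private val batches = ArrayBuffer.empty[Seq[Int]]

  def addBatch(compute: () => Seq[Int]): Unit = {
    val rows = compute()   // slow work, no lock held here
    synchronized {         // lock held only while appending
      batches += rows
    }
  }

  def allRows: Seq[Int] = synchronized(batches.flatten.toList)
}
```

Holding the lock across `compute()` would block every other reader and writer of the sink for the whole duration of the computation, which is what moving the `collect` call out of the block avoids.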
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16156
@liancheng Ah, thank you. I should have tested this first.
---
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16156
Would there be another way to avoid try-catch? I think this is normal
read-path logic, and it might not be safe to rely on exception handling.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16160
**[Test build #69698 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69698/consoleFull)**
for PR 16160 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16158
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69690/
Test PASSed.
---
Github user HyukjinKwon commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90982394
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16156
Hey @xwu0226 @gatorsmile, did some investigation, and I don't think this is
a bug now. Please refer to [my JIRA comment][1] for more details.
[1]:
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16158
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16158
**[Test build #69690 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69690/consoleFull)**
for PR 16158 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/16160#discussion_r90982137
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala
---
@@ -204,6 +204,55 @@ class ForeachSinkSuite extends
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16154
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69684/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16154
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16154
**[Test build #69684 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69684/consoleFull)**
for PR 16154 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16068
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16068
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69687/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16068
**[Test build #69687 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69687/consoleFull)**
for PR 16068 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16160#discussion_r90980149
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala
---
@@ -204,6 +204,55 @@ class ForeachSinkSuite extends
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/16160#discussion_r90979980
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/ForeachSinkSuite.scala
---
@@ -204,6 +204,55 @@ class ForeachSinkSuite extends
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/14789
> but I think the api makes sense to be very similar, or at least in the
> same sort of class
I think it will be hard to have the same API serve both use cases. You
could have one API to
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/16155
LGTM.
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/16160#discussion_r90979548
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/ForeachSink.scala
---
@@ -32,46 +31,26 @@ import
Github user xwu0226 commented on the issue:
https://github.com/apache/spark/pull/16156
For normal parquet reader case, we have the following code
```Scala
} else {
  logDebug(s"Falling back to parquet-mr")
  // ParquetRecordReader returns UnsafeRow
```
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16160
**[Test build #69697 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69697/consoleFull)**
for PR 16160 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16161
**[Test build #69696 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69696/consoleFull)**
for PR 16161 at commit
GitHub user aray opened a pull request:
https://github.com/apache/spark/pull/16161
[SPARK-18717][SQL] Make code generation for Scala Map work with
immutable.Map also
## What changes were proposed in this pull request?
Fixes compile errors in generated code when user has
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/16160
[SPARK-18721][SS]Fix ForeachSink with watermark + append
## What changes were proposed in this pull request?
Right now ForeachSink creates a new physical plan, so StreamExecution
cannot
Github user xwu0226 commented on the issue:
https://github.com/apache/spark/pull/16156
@liancheng I see. In the normal Parquet reader path, ParquetFileFormat uses
Hadoop's `ParquetRecordReader`, to which we cannot add such toleration code.
---
Github user ChorPangChan commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90976630
--- Diff:
streaming/src/main/java/org/apache/spark/streaming/status/api/v1/BatchStatus.java
---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16155
**[Test build #69694 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69694/consoleFull)**
for PR 16155 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90977399
--- Diff:
streaming/src/main/java/org/apache/spark/streaming/status/api/v1/BatchStatus.java
---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16113
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69693/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16113
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16113
**[Test build #69695 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69695/consoleFull)**
for PR 16113 at commit
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/11692
Wow this is old. @WangTaoTheTonic I'm not sure this is the right fix. The
code in YarnClientSchedulerBackend is already catching InterruptedException:
```
try {
  val
```
Github user zsxwing closed the pull request at:
https://github.com/apache/spark/pull/16153
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/16153
Merging to 2.1.
---
Github user wangmiao1981 commented on the issue:
https://github.com/apache/spark/pull/16150
cc @felixcheung
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16113
**[Test build #69693 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69693/consoleFull)**
for PR 16113 at commit
Github user michalsenkyr commented on a diff in the pull request:
https://github.com/apache/spark/pull/16157#discussion_r90975179
--- Diff: docs/programming-guide.md ---
@@ -347,7 +347,7 @@ Some notes on reading files with Spark:
Apart from text files, Spark's Scala API
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16153
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16153
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/69682/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16153
**[Test build #69682 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69682/consoleFull)**
for PR 16153 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16156
@xwu0226 Just tested that this issue also affects the normal Parquet reader
(by setting `spark.sql.parquet.enableVectorizedReader` to `false`). That's also
why #9940 couldn't take a similar
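For anyone reproducing this, the setting mentioned above can be applied on an active `SparkSession` (referred to here as `spark`); this only switches the read path for testing and is not part of the fix itself:

```scala
// Disable the vectorized reader to exercise the parquet-mr path.
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
```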
Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90974262
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -578,4 +578,66 @@ class
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90974235
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90970052
--- Diff:
streaming/src/main/java/org/apache/spark/streaming/status/api/v1/BatchStatus.java
---
@@ -0,0 +1,30 @@
+/*
+ * Licensed to the Apache
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90972417
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/SecurityFilter.scala
---
@@ -0,0 +1,37 @@
+/*
+ * Licensed to the Apache
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90973570
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/ui/StreamingJobProgressListener.scala
---
@@ -39,6 +39,8 @@ private[streaming] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90971231
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/AllReceiversResource.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90970807
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/StreamingListener.scala
---
@@ -66,6 +69,9 @@ case class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90970255
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala ---
@@ -45,7 +45,7 @@ import org.apache.spark.storage.StorageLevel
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90971580
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/AllReceiversResource.scala
---
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90971838
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/JacksonMessageWriter.scala
---
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/16000#discussion_r90973292
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/status/api/v1/StreamingApiRootResource.scala
---
@@ -0,0 +1,133 @@
+/*
+ * Licensed to
Github user nchammas commented on the issue:
https://github.com/apache/spark/pull/16151
@davies - Should this also be cherry-picked into 2.0 and 2.1?
I think this config has been there for a while, just without documentation.
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90973620
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@ public
Github user actuaryzhang commented on the issue:
https://github.com/apache/spark/pull/16131
@srowen @yanboliang
I have updated the code and further cleaned up the tests. Please review and
let me know if there are any questions. Thanks.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/16151
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90972771
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90972713
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -578,4 +578,66 @@ class
Github user davies commented on the issue:
https://github.com/apache/spark/pull/16151
lgtm, merging into master
---
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90972508
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90972121
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@ public
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/16147
It seems some tests fail in some cases when running executors as separate
processes (e.g. `local-cluster`). In this case, it fails simply because the
classpath is too long
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16159
**[Test build #69692 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69692/consoleFull)**
for PR 16159 at commit
Github user dongjoon-hyun commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90971885
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/SpecificParquetRecordReaderBase.java
---
@@ -107,7 +107,16 @@
Github user weiqingy commented on the issue:
https://github.com/apache/spark/pull/16159
References for upgrading sbt-assembly:
https://github.com/sbt/sbt-assembly/blob/master/Migration.md
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16030
@brkyvz I also worry about the behavior change. Let me check whether the
original behavior is by design or by accident. If it is a bug from the very
beginning, then we should just fix it in this
GitHub user weiqingy opened a pull request:
https://github.com/apache/spark/pull/16159
[SPARK-18697][BUILD] Upgrade sbt plugins
## What changes were proposed in this pull request?
This PR upgrades the following sbt plugins:
```
(list truncated in the archive)
```
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/16158#discussion_r90970832
--- Diff:
mllib/src/main/scala/org/apache/spark/ml/tuning/TrainValidationSplit.scala ---
@@ -226,6 +230,29 @@ class TrainValidationSplitModel private[ml] (
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/16157#discussion_r90970609
--- Diff: docs/programming-guide.md ---
@@ -347,7 +347,7 @@ Some notes on reading files with Spark:
Apart from text files, Spark's Scala API also
Github user xwu0226 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90970607
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -578,4 +578,66 @@ class
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16156
BTW, I think this PR is a cleaner fix than #9940, which introduces
temporary metadata while merging two `StructType`s and erases it in a later
phase. We may want to remove the hack done in
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/16158
**[Test build #69690 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69690/consoleFull)**
for PR 16158 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/15998
**[Test build #69691 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/69691/consoleFull)**
for PR 15998 at commit
Github user liancheng commented on the issue:
https://github.com/apache/spark/pull/16156
Actually, PR #9940 should have already fixed this issue. I'm checking why
it doesn't work under 2.0.1 for 2.0.2.
---
GitHub user hhbyyh opened a pull request:
https://github.com/apache/spark/pull/16158
[SPARK-18724][ML] Add TuningSummary for TrainValidationSplit
## What changes were proposed in this pull request?
jira: https://issues.apache.org/jira/browse/SPARK-18724
Currently
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/15998
retest this please
---
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90969603
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -578,4 +578,66 @@ class
Github user xwu0226 commented on a diff in the pull request:
https://github.com/apache/spark/pull/16156#discussion_r90969322
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala
---
@@ -578,4 +578,66 @@ class
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/16157
Can one of the admins verify this patch?
---