Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17765
**[Test build #76251 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76251/testReport)**
for PR 17765 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113827924
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -825,6 +832,11 @@ class StreamExecution(
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17784
Merged build finished. Test PASSed.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17784
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76243/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17765
**[Test build #76250 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76250/testReport)**
for PR 17765 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17784
**[Test build #76243 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76243/testReport)**
for PR 17784 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17789
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17789
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76242/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17789
**[Test build #76242 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76242/testReport)**
for PR 17789 at commit
Github user kunalkhamar commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113825863
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -825,6 +832,11 @@ class StreamExecution(
Github user kunalkhamar commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113825791
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -308,6 +311,7 @@ class StreamExecution(
Github user kunalkhamar commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113825741
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -289,6 +291,7 @@ class StreamExecution(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17765
**[Test build #76249 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76249/testReport)**
for PR 17765 at commit
Github user kunalkhamar commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113825644
--- Diff: core/src/main/scala/org/apache/spark/ui/UIUtils.scala ---
@@ -446,7 +446,7 @@ private[spark] object UIUtils extends Logging {
val
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17781
```
hive> create table partTab (a string, b string) PARTITIONED BY (`a,`
string, `b,` string);
OK
```
It is OK to use commas in partition column names.
---
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113825493
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -500,6 +502,69 @@ class StreamSuite extends StreamTest {
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Merged build finished. Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76248/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76248 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76248/testReport)**
for PR 17787 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17791
BTW, please add [BACKPORT-2.0] in your PR title.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17789
yes. See
https://github.com/apache/spark/blob/master/streaming/src/main/scala/org/apache/spark/streaming/Checkpoint.scala#L138
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76247/
Test PASSed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76247 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76247/testReport)**
for PR 17787 at commit
Github user maropu commented on the issue:
https://github.com/apache/spark/pull/17191
@cloud-fan ok, could you check again? Thanks!
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17789
@zsxwing They are compressed? Interesting... I never played with Spark streaming, unfortunately, so I did not know!
---
Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/17791
Thank you for pinging me, @mridulm . :)
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17789
Streaming checkpoints are on HDFS but don't have an extension :)
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17789
Sounds good on doing it in a separate PR. I am not too worried about
shuffle/blockdata/etc., btw, since they are private to application execution;
checkpoints tend to also be perused for other
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17789
In addition, I agree that having an extension and separating the codecs are
good ideas, but they should be done in other PRs to avoid introducing multiple
features in one large PR.
---
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17789
Shuffle and cache files are not on HDFS :-) They do not potentially survive
the application or get consumed OOB for recovery/inspection.
---
Github user hvanhovell commented on the issue:
https://github.com/apache/spark/pull/17791
This is very similar to https://github.com/apache/spark/pull/16804/files;
however, that approach, like this one, is slightly broken (because it does not
support nested char/varchar columns), can
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17786
I agree with @mgummelt; maxCores should have reflected the user config option.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17789
> A question I had even with the earlier PR was - should we add the
extension to either the directory or the file indicating the compression type?
Shuffle and cache files don't have an
Github user wangmiao1981 commented on a diff in the pull request:
https://github.com/apache/spark/pull/17640#discussion_r113823246
--- Diff: R/pkg/R/serialize.R ---
@@ -83,6 +83,7 @@ writeObject <- function(con, object, writeType = TRUE) {
Date = writeDate(con,
Github user mridulm commented on the issue:
https://github.com/apache/spark/pull/17791
+CC @dongjoon-hyun - since you were looking at ORC.
---
Github user mridulm commented on a diff in the pull request:
https://github.com/apache/spark/pull/17789#discussion_r113822807
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ReliableCheckpointRDD.scala ---
@@ -169,7 +177,12 @@ private[spark] object ReliableCheckpointRDD extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76248 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76248/testReport)**
for PR 17787 at commit
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76247 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76247/testReport)**
for PR 17787 at commit
Github user aramesh117 commented on the issue:
https://github.com/apache/spark/pull/17789
@zsxwing Sorry for the delay! Thank you so much for your review and I saw a
bit of your patch - it looks very nice. I have just one question - would it be
a good idea to separate the codecs for
Github user mgummelt commented on the issue:
https://github.com/apache/spark/pull/17786
@dbtsai Can we please wait to get an LGTM from one of the active Mesos
contributors (@skonto, @tnachen, myself, etc.) before merging Mesos code?
I would rather this have been solved in the
Github user markgrover commented on a diff in the pull request:
https://github.com/apache/spark/pull/17790#discussion_r113817810
--- Diff: pom.xml ---
@@ -136,7 +136,7 @@
10.12.1.1
1.8.2
1.6.0
-9.2.16.v20160414
+9.3.11.v20160721
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17765
**[Test build #76246 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76246/testReport)**
for PR 17765 at commit
Github user kunalkhamar commented on a diff in the pull request:
https://github.com/apache/spark/pull/17765#discussion_r113817633
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -252,6 +252,7 @@ class StreamExecution(
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17765
**[Test build #76245 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76245/testReport)**
for PR 17765 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17791
Can one of the admins verify this patch?
---
GitHub user umehrot2 opened a pull request:
https://github.com/apache/spark/pull/17791
Reading a Hive ORC table with varchar/char columns in Spark SQL should
not fail
## What changes were proposed in this pull request?
Reading from a Hive ORC table containing
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17781
Creating views in Hive does not have such an issue.
```
hive> create table tab2 (a string, b string);
OK
Time taken: 0.807 seconds
hive> create view view2 (`a,`, b) as SELECT
Github user robert3005 commented on the issue:
https://github.com/apache/spark/pull/16648
@bdrillard if you don't have time to finish this up, I am happy to update
this to the latest. I would really like to see this fixed, since it's silly that
you can't have more than 3k columns
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17790
cc @yhuai
---
Github user vanzin commented on the issue:
https://github.com/apache/spark/pull/17790
> it will impact the classpath for user apps
Jetty is shaded in Spark.
---
Github user srowen commented on the issue:
https://github.com/apache/spark/pull/17790
CC @zsxwing as it touches Jetty.
This is probably OK, even though it will impact the classpath for user apps
and Jetty could have some non-trivial changes. I agree there's good reason to
do this
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/17790#discussion_r113809376
--- Diff: pom.xml ---
@@ -136,7 +136,7 @@
10.12.1.1
1.8.2
1.6.0
-9.2.16.v20160414
+9.3.11.v20160721
--- End
Github user dbtsai commented on a diff in the pull request:
https://github.com/apache/spark/pull/17786#discussion_r113809015
--- Diff:
resource-managers/mesos/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosCoarseGrainedSchedulerBackend.scala
---
@@ -60,8 +60,16 @@
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17790
**[Test build #76244 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76244/testReport)**
for PR 17790 at commit
Github user markgrover commented on the issue:
https://github.com/apache/spark/pull/17790
I am running unit tests locally and, while the run hasn't finished, it's
looking pretty good so far. I think it'd be good to get Jenkins to run the
tests here anyway, so I'm putting up a PR sooner than
GitHub user markgrover opened a pull request:
https://github.com/apache/spark/pull/17790
[SPARK-20514][CORE] Upgrade Jetty to 9.3.13.v20161014
Upgrade Jetty so it can work with Hadoop 3 (alpha 2 release, in particular).
Without this change, because of incompatibility between Jetty
Github user HyukjinKwon commented on the issue:
https://github.com/apache/spark/pull/17785
cc @felixcheung, @hvanhovell and @gatorsmile. Could you take a look please?
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17784
**[Test build #76243 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76243/testReport)**
for PR 17784 at commit
Github user HyukjinKwon closed the pull request at:
https://github.com/apache/spark/pull/17785
---
GitHub user HyukjinKwon reopened a pull request:
https://github.com/apache/spark/pull/17785
[SPARK-20493][R] De-duplicate parse logics for DDL-like type strings in R
## What changes were proposed in this pull request?
It seems we are using `SQLUtils.getSQLDataType` for
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17752
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/17752
LGTM. Merging to master and 2.2
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17761
---
Github user tdas commented on the issue:
https://github.com/apache/spark/pull/17761
LGTM. Merging this to master and 2.2
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17789
**[Test build #76242 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76242/testReport)**
for PR 17789 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17715
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17024
@aramesh117 I just opened #17789 to finish the rest of the work. All credit
will go to you when the new PR is merged.
---
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17789
cc @mridulm since you reviewed the initial PR.
---
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/17789
[SPARK-19525][CORE]Add RDD checkpoint compression support
## What changes were proposed in this pull request?
This PR adds RDD checkpoint compression support and adds a new config
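A hedged sketch of the configuration a user might set to turn this on. The config key `spark.checkpoint.compress` and its interplay with the existing `spark.io.compression.codec` setting are assumptions inferred from the PR description, not confirmed by this thread; the settings are shown as a plain dict so the sketch stands alone without a Spark installation.

```python
# Illustrative only: settings to enable RDD checkpoint compression as
# proposed in this PR. "spark.checkpoint.compress" is the new flag by
# assumption; "spark.io.compression.codec" is the existing codec knob,
# reused here by assumption.
checkpoint_conf = {
    "spark.checkpoint.compress": "true",
    "spark.io.compression.codec": "lz4",
}

# Render as spark-submit arguments for illustration.
for key, value in checkpoint_conf.items():
    print(f"--conf {key}={value}")
```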
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17715
LGTM. Merged into master and branch-2.2.
Thanks @yanboliang for delivering this big feature, which is very useful for
many practical use cases in industry.
Thanks @WeichenXu123
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17781
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76238/
Test FAILed.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17781
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17781
**[Test build #76238 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76238/testReport)**
for PR 17781 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/17788
---
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17786
Tests are added in a followup PR. https://github.com/apache/spark/pull/17788
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17788
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76240/
Test PASSed.
---
Github user dbtsai commented on the issue:
https://github.com/apache/spark/pull/17788
LGTM. Merged into master and branch 2.1. Thanks.
---
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17788
Merged build finished. Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76241 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76241/testReport)**
for PR 17787 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76241/
Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17788
**[Test build #76240 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76240/testReport)**
for PR 17788 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Merged build finished. Test FAILed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76241 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76241/testReport)**
for PR 17787 at commit
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113792381
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite
Github user anabranch commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113792301
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17788
**[Test build #76240 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76240/testReport)**
for PR 17788 at commit
Github user dgshep commented on the issue:
https://github.com/apache/spark/pull/17788
Jenkins, ok to test.
---
GitHub user dgshep opened a pull request:
https://github.com/apache/spark/pull/17788
[SPARK-20483][MINOR] Add test for case
## What changes were proposed in this pull request?
Add a test case for scenarios where executor.cores is set as a
(non)divisor of spark.cores.max
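The divisor vs. non-divisor distinction can be sketched with plain arithmetic. This is an illustration of the scenario under test, not the scheduler's actual code, and the function name is hypothetical:

```python
def executors_launchable(cores_max: int, executor_cores: int) -> tuple:
    """How many whole executors fit under the core cap, and how many
    cores are left stranded. When cores_max is not a multiple of
    executor_cores, the remainder cores cannot host another executor."""
    executors = cores_max // executor_cores
    stranded = cores_max - executors * executor_cores
    return executors, stranded

# Divisor case: every core is used.
print(executors_launchable(12, 4))  # -> (3, 0)
# Non-divisor case: one core goes unused.
print(executors_launchable(10, 3))  # -> (3, 1)
```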
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/76239/
Test PASSed.
---
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76239 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76239/testReport)**
for PR 17787 at commit
Github user AmplabJenkins commented on the issue:
https://github.com/apache/spark/pull/17787
Merged build finished. Test PASSed.
---
Github user jisookim0513 closed the pull request at:
https://github.com/apache/spark/pull/16714
---
Github user jisookim0513 commented on the issue:
https://github.com/apache/spark/pull/16714
Ok, not including the updated blocks in task metrics reduced the size of
our event logs. But I am closing this PR, as the current implementation doesn't
seem to be done the right way. Thanks for
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113788603
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite extends
Github user brkyvz commented on a diff in the pull request:
https://github.com/apache/spark/pull/17787#discussion_r113788434
--- Diff:
external/kafka-0-10-sql/src/test/scala/org/apache/spark/sql/kafka010/KafkaSinkSuite.scala
---
@@ -108,6 +108,31 @@ class KafkaSinkSuite extends
Github user SparkQA commented on the issue:
https://github.com/apache/spark/pull/17787
**[Test build #76239 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/76239/testReport)**
for PR 17787 at commit
Github user gatorsmile commented on the issue:
https://github.com/apache/spark/pull/17781
test this please
---
Github user hhbyyh commented on a diff in the pull request:
https://github.com/apache/spark/pull/17130#discussion_r113783993
--- Diff: mllib/src/main/scala/org/apache/spark/ml/fpm/FPGrowth.scala ---
@@ -268,12 +269,8 @@ class FPGrowthModel private[ml] (
val predictUDF =
Github user ajbozarth commented on the issue:
https://github.com/apache/spark/pull/16714
Unless there's still an issue with file size, I think I'm good without this,
but I'll defer to @vanzin
---