Github user oliverpierson commented on the pull request:
https://github.com/apache/spark/pull/11402#issuecomment-190848996
Sounds good. You can find the new JIRA here
[SPARK-13600](https://issues.apache.org/jira/browse/SPARK-13600).
---
If your project is set up for it, you can
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-190842071
**[Test build #52249 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52249/consoleFull)**
for PR 11449 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-190842084
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54610842
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcherArguments.scala
---
@@ -44,7 +44,7 @@ private[mesos] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11369#discussion_r54610723
--- Diff: core/src/main/resources/org/apache/spark/ui/static/historypage.js
---
@@ -135,7 +135,7 @@ $(document).ready(function() {
}
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/11437#issuecomment-190845510
@nongli There is no visible difference on any of the existing benchmarks
(ColumnarBatch and ParquetRead), since they don't use dictionary encoding.
After changing the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11426#issuecomment-190888991
**[Test build #2597 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2597/consoleFull)**
for PR 11426 at commit
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11446#issuecomment-190841976
#11449 supersedes this
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-190842081
Merged build finished. Test FAILed.
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613493
--- Diff:
mllib/src/test/scala/org/apache/spark/ml/classification/MultilayerPerceptronClassifierSuite.scala
---
@@ -49,7 +49,48 @@ class
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613504
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613519
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613510
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613525
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613520
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11449#issuecomment-190848851
**[Test build #52250 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52250/consoleFull)**
for PR 11449 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11437#issuecomment-190855205
**[Test build #52251 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52251/consoleFull)**
for PR 11437 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11448#issuecomment-190863931
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190880411
Thank you for testing, @srowen. Unfortunately, it still fails.
For the Scala Unused Imports, I see what you mean.
Then, I'll update the JIRA issue and
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54628222
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -707,26 +719,33 @@ private[spark] class BlockManager(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10205#issuecomment-190872178
**[Test build #52253 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52253/consoleFull)**
for PR 10205 at commit
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190886825
Thank you, @zsxwing!
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/11440#discussion_r54610922
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/JobGenerator.scala
---
@@ -221,8 +221,12 @@ class JobGenerator(jobScheduler:
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/9229#issuecomment-190849301
I made a quick pass on the unit tests. Will check the implementation later
today.
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/8817#issuecomment-190859920
Due to the removal of the ExternalBlockStore API in SPARK-12667, I think
that this is now "Won't Fix", so do you mind closing this PR? Thanks!
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11448#issuecomment-190863929
Merged build finished. Test PASSed.
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/11445#discussion_r54621585
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,44 +267,6 @@ def take(self, num):
self._jdf, num)
return
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190882478
ok to test
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54619025
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockInfoManager.scala ---
@@ -307,29 +307,48 @@ private[storage] class BlockInfoManager extends
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11320#issuecomment-190874901
Thank you for closing this, @srowen and @mengxr.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54627020
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockInfoManager.scala ---
@@ -307,29 +307,48 @@ private[storage] class BlockInfoManager extends
Github user gatorsmile commented on the pull request:
https://github.com/apache/spark/pull/11441#issuecomment-190883985
Let me close it for now. It sounds like this is specific to
multi-distinct. Let us do it when we fix SQL generation for multi-distinct.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11426#issuecomment-190881271
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11426#issuecomment-190892373
**[Test build #2598 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2598/consoleFull)**
for PR 11426 at commit
Github user steveloughran closed the pull request at:
https://github.com/apache/spark/pull/11346
Github user steveloughran commented on the pull request:
https://github.com/apache/spark/pull/11346#issuecomment-190841889
#11449 should render this obsolete
Github user nongli commented on the pull request:
https://github.com/apache/spark/pull/11437#issuecomment-190846489
Cool. LGTM
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54613238
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -315,6 +315,34 @@ abstract class RDD[T: ClassTag](
}
/**
+ *
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/9229#discussion_r54613528
--- Diff: mllib/src/test/scala/org/apache/spark/ml/ann/GradientSuite.scala
---
@@ -0,0 +1,69 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user olarayej commented on the pull request:
https://github.com/apache/spark/pull/11336#issuecomment-190867380
SparkR doesn't support operations between columns from different DataFrame
objects. Yet you can do:
```
c1 <- df1$c1
c2 <- df2$c2
c3 <- c1 + c2
```
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11426#issuecomment-190881265
Merged build finished. Test FAILed.
Github user gatorsmile closed the pull request at:
https://github.com/apache/spark/pull/11441
Github user markgrover commented on the pull request:
https://github.com/apache/spark/pull/11143#issuecomment-190845440
Kafka 0.9 still has the old high and low level consumers (and the old
producers too).
Correct, but they are not compatible when using the 0.9 client with 0.8
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/11445#discussion_r54620370
--- Diff: python/pyspark/sql/dataframe.py ---
@@ -267,44 +267,6 @@ def take(self, num):
self._jdf, num)
return
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11448#issuecomment-190863461
**[Test build #52247 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52247/consoleFull)**
for PR 11448 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190882980
**[Test build #52254 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52254/consoleFull)**
for PR 11438 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11417#discussion_r54617585
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/QueryPlan.scala
---
@@ -230,8 +230,12 @@ abstract class QueryPlan[PlanType <:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11417#issuecomment-190855193
**[Test build #52252 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52252/consoleFull)**
for PR 11417 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11426#issuecomment-190880968
**[Test build #52248 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52248/consoleFull)**
for PR 11426 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54628030
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -707,26 +719,33 @@ private[spark] class BlockManager(
Github user ijuma commented on the pull request:
https://github.com/apache/spark/pull/11143#issuecomment-190883372
That's right @markgrover, the current approach used by Kafka preserves
compatibility for users, but makes it a bit complicated for libraries/systems
that want to support
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11453#issuecomment-190976635
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user nongli opened a pull request:
https://github.com/apache/spark/pull/11454
[SPARK-13574][SQL] Add benchmark to measure string dictionary decode.
## What changes were proposed in this pull request?
Also updated the other benchmarks when the default to use
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54662358
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -707,26 +719,33 @@ private[spark] class BlockManager(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11436#issuecomment-190993072
**[Test build #52271 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52271/consoleFull)**
for PR 11436 at commit
Github user sun-rui commented on the pull request:
https://github.com/apache/spark/pull/11336#issuecomment-190996596
@olarayej,
c3 can be used on a DataFrame that is the join of df1 and df2:
df3 <- join(df1, df2)
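The constraint discussed in this exchange — a column expression is tied to the DataFrame it came from, and combining columns of two different DataFrames only works after those frames are joined — can be sketched with a small framework-free toy. This is a hypothetical illustration, not the SparkR or Spark API:

```python
# Toy model (hypothetical, not SparkR/PySpark): each Column remembers its
# owning frame, and arithmetic across two different frames is rejected
# until the frames are joined.

class Column:
    def __init__(self, frame, name):
        self.frame = frame  # the DataFrame this column belongs to
        self.name = name

    def __add__(self, other):
        # Mixing columns from two distinct frames is an error, as in the thread.
        if self.frame is not other.frame:
            raise ValueError("columns belong to different DataFrames; join them first")
        return Column(self.frame, f"({self.name} + {other.name})")

class DataFrame:
    def __init__(self, columns):
        self._cols = {c: Column(self, c) for c in columns}

    def __getitem__(self, name):
        return self._cols[name]

    def join(self, other):
        # The joined frame re-owns every column, so they become combinable.
        merged = DataFrame([])
        merged._cols = {c: Column(merged, c)
                        for c in list(self._cols) + list(other._cols)}
        return merged

df1, df2 = DataFrame(["c1"]), DataFrame(["c2"])
try:
    c3 = df1["c1"] + df2["c2"]   # rejected: columns from different frames
except ValueError as e:
    print("error:", e)

df3 = df1.join(df2)              # after the join, both columns live in df3
c3 = df3["c1"] + df3["c2"]
print(c3.name)                   # → (c1 + c2)
```

The same ownership check is why, in the SparkR snippet above, `c1 + c2` is only meaningful once a joined DataFrame exists.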
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11445#issuecomment-191002238
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11445#issuecomment-191002242
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10242#discussion_r54664922
--- Diff: python/pyspark/ml/clustering.py ---
@@ -291,6 +292,317 @@ def _create_model(self, java_model):
return BisectingKMeansModel(java_model)
Github user ehsanmok commented on the pull request:
https://github.com/apache/spark/pull/9916#issuecomment-190971684
@jkbradley I'd really like to but don't have enough time these days. I'll
add that to my to-do list and will let you know then.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11063#issuecomment-190976446
**[Test build #2600 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2600/consoleFull)**
for PR 11063 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11453#issuecomment-190976630
Merged build finished. Test FAILed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11453#issuecomment-190976619
**[Test build #52264 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52264/consoleFull)**
for PR 11453 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11178#issuecomment-190975675
**[Test build #52265 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52265/consoleFull)**
for PR 11178 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190985880
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11450#issuecomment-190985801
Merged build finished. Test PASSed.
Github user nongli commented on a diff in the pull request:
https://github.com/apache/spark/pull/11454#discussion_r54660492
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
---
@@ -219,11 +216,43 @@ object
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11438#issuecomment-190985876
Merged build finished. Test FAILed.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54661775
--- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala
---
@@ -648,8 +647,38 @@ private[spark] class BlockManager(
}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11445#issuecomment-190995266
**[Test build #52273 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52273/consoleFull)**
for PR 11445 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11455#issuecomment-190995264
**[Test build #52272 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52272/consoleFull)**
for PR 11455 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10241#issuecomment-190999698
**[Test build #52274 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52274/consoleFull)**
for PR 10241 at commit
Github user olarayej commented on the pull request:
https://github.com/apache/spark/pull/11336#issuecomment-191003387
@sun-rui Yes. In that case, c3 will only be associated with df3.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11063#issuecomment-191009577
**[Test build #2599 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2599/consoleFull)**
for PR 11063 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11063#issuecomment-191013178
**[Test build #2600 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2600/consoleFull)**
for PR 11063 at commit
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/11444#discussion_r54667527
--- Diff: python/pyspark/sql/context.py ---
@@ -63,7 +63,33 @@ def toDF(self, schema=None, sampleRatio=None):
"""
return
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11455#issuecomment-191014184
**[Test build #52279 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52279/consoleFull)**
for PR 11455 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10953#issuecomment-191016300
**[Test build #52280 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52280/consoleFull)**
for PR 10953 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11417#issuecomment-191018513
**[Test build #52268 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52268/consoleFull)**
for PR 11417 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/11452#issuecomment-190975127
I think there's some slight overlap between this and #11178, so it would be
great if you could also review that PR.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11453#issuecomment-190975690
**[Test build #52264 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52264/consoleFull)**
for PR 11453 at commit
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11454#discussion_r54659878
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetReadBenchmark.scala
---
@@ -72,7 +72,7 @@ object
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11444#discussion_r54661176
--- Diff: python/pyspark/sql/types.py ---
@@ -681,6 +681,139 @@ def __eq__(self, other):
for v in [ArrayType, MapType,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10927#issuecomment-190990898
**[Test build #52270 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52270/consoleFull)**
for PR 10927 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11436#issuecomment-190990900
**[Test build #52269 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52269/consoleFull)**
for PR 11436 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54661867
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockInfoManager.scala ---
@@ -307,29 +307,48 @@ private[storage] class BlockInfoManager extends
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11436#discussion_r54661845
--- Diff: core/src/main/scala/org/apache/spark/rdd/RDD.scala ---
@@ -315,6 +315,34 @@ abstract class RDD[T: ClassTag](
}
/**
+ *
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/11440#issuecomment-190994425
Thanks @jerryshao, @srowen, and @zsxwing for the suggestions. I'll close this PR.
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/11440
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/11455
Sync worker's state after registering with master
## What changes were proposed in this pull request?
This lists all the cases in which the Master cannot talk to the Worker for a
while and then the network
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/10241#issuecomment-190998361
Jenkins, retest this please.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10927#issuecomment-190998258
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10927#issuecomment-190997827
**[Test build #52270 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52270/consoleFull)**
for PR 10927 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11445#issuecomment-191001993
**[Test build #52273 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52273/consoleFull)**
for PR 11445 at commit
Github user zjffdu commented on a diff in the pull request:
https://github.com/apache/spark/pull/10242#discussion_r54665060
--- Diff: python/pyspark/ml/clustering.py ---
@@ -291,6 +292,317 @@ def _create_model(self, java_model):
return BisectingKMeansModel(java_model)
Github user dongjoon-hyun commented on the pull request:
https://github.com/apache/spark/pull/11292#issuecomment-191004045
I'm a big fan of this PR. When will it be merged, @srowen? :)
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/11448
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11444#discussion_r54666473
--- Diff: python/pyspark/sql/types.py ---
@@ -681,6 +681,139 @@ def __eq__(self, other):
for v in [ArrayType, MapType,
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/11444#discussion_r54666521
--- Diff: python/pyspark/sql/context.py ---
@@ -63,7 +63,33 @@ def toDF(self, schema=None, sampleRatio=None):
"""
return
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11371#issuecomment-191012390
**[Test build #52278 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52278/consoleFull)**
for PR 11371 at commit
1 - 100 of 582 matches