Github user lianhuiwang commented on a diff in the pull request:
https://github.com/apache/spark/pull/4066#discussion_r24315166
--- Diff: core/src/main/scala/org/apache/spark/SparkHadoopWriter.scala ---
@@ -105,24 +106,61 @@ class SparkHadoopWriter(@transient jobConf: JobConf)
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4467#issuecomment-73476913
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4469#issuecomment-73477340
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user saucam commented on a diff in the pull request:
https://github.com/apache/spark/pull/4469#discussion_r24315891
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/types/dataTypes.scala ---
@@ -362,7 +362,7 @@ case object BooleanType extends NativeType with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4467#issuecomment-73476905
[Test build #27091 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27091/consoleFull)
for PR 4467 at commit
GitHub user saucam opened a pull request:
https://github.com/apache/spark/pull/4469
SPARK-5684: Pass in partition name along with location information, as the location can be different (that is, it may not contain the partition keys)
While parsing the partition keys from the locations,
Github user saucam commented on a diff in the pull request:
https://github.com/apache/spark/pull/4469#discussion_r24316073
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -310,7 +310,10 @@ class SQLContext(@transient val sparkContext:
SparkContext)
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/4469#issuecomment-73478985
@liancheng please suggest ...
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4412#discussion_r24316626
--- Diff: core/src/main/scala/org/apache/spark/SparkEnv.scala ---
@@ -93,6 +93,19 @@ class SparkEnv (
// actorSystem.awaitTermination()
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4468#issuecomment-73480882
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4468#issuecomment-73480872
[Test build #27092 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27092/consoleFull)
for PR 4468 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4468#issuecomment-73481187
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4468#issuecomment-73481177
[Test build #27093 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27093/consoleFull)
for PR 4468 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4262
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4470
SPARK-5239 [CORE] JdbcRDD throws java.lang.AbstractMethodError:
oracle.jdbc.driver.xx.isClosed()Z
This is a completion of https://github.com/apache/spark/pull/4033 which was
withdrawn for some
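The error in the PR title comes from older JDBC drivers that were compiled before `java.sql.ResultSet.isClosed()` existed. A minimal plain-Scala sketch of the defensive pattern (the `Closable` trait and all names below are illustrative stand-ins, not Spark's actual code):

```scala
// Some pre-JDBC-4.0 drivers (e.g. old Oracle ojdbc jars) never implemented
// isClosed() and throw AbstractMethodError when it is called. Guarding the
// probe lets cleanup proceed regardless.
trait Closable {
  def isClosed: Boolean
  def close(): Unit
}

// Simulates a legacy driver class that is missing the isClosed implementation.
class LegacyResultSet extends Closable {
  var closed = false
  def isClosed: Boolean = throw new AbstractMethodError("isClosed()Z")
  def close(): Unit = { closed = true }
}

def safeClose(rs: Closable): Unit = {
  val alreadyClosed =
    try rs.isClosed
    catch { case _: AbstractMethodError => false } // assume open, close anyway
  if (!alreadyClosed) rs.close()
}

val rs = new LegacyResultSet
safeClose(rs)
println(rs.closed) // true: close() still ran despite the AbstractMethodError
```

The point of the design is that the `isClosed` probe is best-effort: if the driver cannot answer, the code falls back to closing unconditionally.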
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4470#issuecomment-73482230
[Test build #27094 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27094/consoleFull)
for PR 4470 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73523508
[Test build #27104 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27104/consoleFull)
for PR 4472 at commit
Github user kai-zeng commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73523394
Jenkins, retest this please.
GitHub user OopsOutOfMemory opened a pull request:
https://github.com/apache/spark/pull/4473
[SPARK-5651][SQL] Support db.table in Create Table within backticks of
HiveContext
Support:
```sql
create table `table_in_database_creation.test2` as select * from src limit 1;
```
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4475#issuecomment-73537910
Can one of the admins verify this patch?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73538427
[Test build #27105 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27105/consoleFull)
for PR 4472 at commit
GitHub user edenovit opened a pull request:
https://github.com/apache/spark/pull/4475
https://issues.apache.org/jira/browse/SPARK-5688
When choosing the subset of categories for categorical variables, this was
not done randomly.
You can merge this pull request into a Git
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73523672
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/3976#issuecomment-73538842
I thought we had documented somewhere which modes different things run in, but I'm not seeing it in the documentation. I think I will file a JIRA to add some
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/4427#discussion_r24334944
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -397,6 +397,13 @@ private[hive] object HiveQl {
protected def
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4473#issuecomment-73526210
Can one of the admins verify this patch?
GitHub user lianhuiwang opened a pull request:
https://github.com/apache/spark/pull/4474
[SPARK-5687][Core]TaskResultGetter need to catch OutOfMemoryError.
because in enqueueSuccessfulTask another thread fetches the result; if the result is large, it may throw a
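The idea behind SPARK-5687 can be sketched in plain Scala (the helper below is illustrative, not Spark's actual `TaskResultGetter` code): catching `OutOfMemoryError` in the fetch path lets the scheduler fail the task instead of silently losing the fetch thread.

```scala
// Deserializing a large task result can exhaust the heap; surface that as a
// task failure rather than an uncaught error on a background thread.
def fetchResult(fetch: () => Array[Byte]): Either[String, Array[Byte]] =
  try Right(fetch())
  catch {
    case e: OutOfMemoryError =>
      Left(s"task result fetch failed: ${e.getMessage}")
  }

// Simulate the failure instead of actually exhausting the heap.
val failed = fetchResult(() => throw new OutOfMemoryError("result too large"))
println(failed) // Left(task result fetch failed: result too large)

val ok = fetchResult(() => Array[Byte](1, 2, 3))
println(ok.isRight) // true
```

Catching `Error` subclasses is normally discouraged, which is why the comments in this thread weigh whether the driver-side OOM is better fixed at the source.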
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/3525#issuecomment-73535044
Earlier I thought that both of the two configs could coexist. Following @tgravescs, I think overhead is more necessary than OverheadFraction, because at some time it has very
Github user MiguelPeralvo commented on the pull request:
https://github.com/apache/spark/pull/4457#issuecomment-73536251
@nchammas,
Regarding the help text, I'd say that it applies to the destroy, login,
reboot-slaves, get-master, stop and start options, not only launch.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73524507
[Test build #27105 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27105/consoleFull)
for PR 4472 at commit
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/4292#issuecomment-73537266
sorry for my delay, I was out last week.
I would have to agree with @JoshRosen last comment. If they have already
created the RDD then I wouldn't expect any
Github user luogankun commented on the pull request:
https://github.com/apache/spark/pull/4033#issuecomment-73538418
The stacktrace for all of the failing test cases is
`sbt.ForkMain$ForkError: Futures timed out after [1 minute]`. Could this be
related to Jenkins?
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4475#issuecomment-73542338
(Add `[MLLIB]` after `SPARK-5688` in the title?)
I think there is an issue here, but maybe a simpler solution in the short
term. Since you can't test all 2^n - 2
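srowen's "2^n - 2" counts the nonempty proper subsets of n categories, i.e. the candidate category splits before de-duplicating mirror images (each subset and its complement describe the same split, leaving 2^(n-1) - 1 distinct splits). A quick check by enumeration, as a plain-Scala sketch:

```scala
// Enumerate all subsets of n categories and keep the nonempty proper ones,
// which is exactly 2^n - 2.
def candidateSplits(n: Int): Int = {
  val categories = (0 until n).toSet
  categories.subsets().count(s => s.nonEmpty && s.size < n)
}

println(candidateSplits(4)) // 14, i.e. 2^4 - 2
```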
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/3525#issuecomment-73523911
I'm still not on board with having 2 different configs. You then have to explain to the user what they are, which one takes precedence, etc. I can see cases where the %
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4033#issuecomment-73533940
NP I will assign to you @luogankun in any event for the credit. Right now
the new PR fails Hive tests, twice, which is suspicious, though I can't see how
it would affect
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73538438
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4474#issuecomment-73545439
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73523669
[Test build #27104 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27104/consoleFull)
for PR 4472 at commit
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/4427#discussion_r24335234
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveQl.scala ---
@@ -397,6 +397,13 @@ private[hive] object HiveQl {
protected def
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4474#issuecomment-73530090
[Test build #27106 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27106/consoleFull)
for PR 4474 at commit
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/3976#issuecomment-73541324
@tgravescs your thought is right, but the difference is only in YARN's internal Client; spark-submit works the same way with YARN client and YARN cluster mode, so I think
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/4427#issuecomment-73526018
@watermen
You need to run the test suite locally, make sure you can generate the golden file, and then commit the files to the test server.
Otherwise,
Github user luogankun commented on the pull request:
https://github.com/apache/spark/pull/4033#issuecomment-73533646
@srowen I was busy with other things and forgot to resubmit a new PR after closing the old one. Thanks for your help.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4452#issuecomment-73533792
@vanzin Any more thoughts on this? I feel pretty good about it, as it does
seem a lot right-er, and we have a test case that failed before and worked
after. Also, I got
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4474#issuecomment-73545420
[Test build #27106 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27106/consoleFull)
for PR 4474 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4450#issuecomment-73627450
[Test build #27152 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27152/consoleFull)
for PR 4450 at commit
GitHub user kayousterhout opened a pull request:
https://github.com/apache/spark/pull/4488
SPARK-5701: Only set ShuffleReadMetrics when task has shuffle deps
The updateShuffleReadMetrics method in TaskMetrics (called by the executor
heartbeater) will currently always add a
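The guard described in SPARK-5701 can be sketched as follows (the case class and helper are illustrative stand-ins for Spark's `TaskMetrics` internals): the heartbeat-driven merge should only materialize a `ShuffleReadMetrics` when the task actually has shuffle dependencies, otherwise every task looks like it performed a shuffle read in the UI.

```scala
// Minimal model: per-dependency read metrics are merged into an Option,
// which stays None for tasks with no shuffle dependencies.
case class ShuffleReadMetrics(recordsRead: Long)

def mergeShuffleReadMetrics(
    perDependency: Seq[ShuffleReadMetrics]): Option[ShuffleReadMetrics] =
  if (perDependency.isEmpty) None // no shuffle deps: leave metrics unset
  else Some(ShuffleReadMetrics(perDependency.map(_.recordsRead).sum))

println(mergeShuffleReadMetrics(Nil)) // None
println(mergeShuffleReadMetrics(
  Seq(ShuffleReadMetrics(3), ShuffleReadMetrics(4)))) // Some(ShuffleReadMetrics(7))
```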
Github user shenh062326 commented on the pull request:
https://github.com/apache/spark/pull/4363#issuecomment-73625458
Hi @andrewor14 , @sryza and @rxin. Thanks. I agree with your views. I will
change sc.killExecutor to not throw an assertion error.
Github user shenh062326 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4363#discussion_r24381524
--- Diff: core/src/main/scala/org/apache/spark/HeartbeatReceiver.scala ---
@@ -17,33 +17,82 @@
package org.apache.spark
-import
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4468#discussion_r24381522
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -409,8 +411,26 @@ private[sql] class DataFrameImpl protected[sql](
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/4363#issuecomment-73626671
@andrewor14 @rxin yes, I agree with you. For the other modes we will need to implement executor killing later, so this PR unifies failure detection between blockmanager
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4168#issuecomment-73625504
@sryza I may be missing something but I don't understand why
`numPendingExecutors` should be rederived from the user calling
`requestTotalExecutors` every time. What
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/4473#issuecomment-73629653
@OopsOutOfMemory @rxin See the end of my
PR (https://github.com/apache/spark/pull/4427); @yhuai
says it is a bug in Hive and it has been fixed by
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73629671
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4472#issuecomment-73629667
[Test build #27142 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27142/consoleFull)
for PR 4472 at commit
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-73630755
retest this please
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4474#issuecomment-73632020
Yes, if you find a situation where the driver can oom, just let us know, we
can try to fix it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4488#issuecomment-73633096
[Test build #27147 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27147/consoleFull)
for PR 4488 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4489#issuecomment-73634246
[Test build #27148 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27148/consoleFull)
for PR 4489 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4489#issuecomment-73634250
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-73637897
[Test build #27158 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27158/consoleFull)
for PR 4382 at commit
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/4473#issuecomment-73637852
@OopsOutOfMemory
```sql
create table `table_in_database_creation`.`test2` as select * from src limit 1;
create table `table_in_database_creation`.`test4` (a
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4459#issuecomment-73642221
[Test build #27171 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27171/consoleFull)
for PR 4459 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4427#issuecomment-73642225
[Test build #27172 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27172/consoleFull)
for PR 4427 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4494#issuecomment-73647069
I've merged this. Thanks.
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4446#discussion_r24390027
--- Diff: python/pyspark/sql_tests.py ---
@@ -285,6 +285,38 @@ def test_aggregator(self):
self.assertTrue(95
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4495#issuecomment-73651853
[Test build #27180 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27180/consoleFull)
for PR 4495 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4495#issuecomment-73651771
[Test build #27180 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27180/consoleFull)
for PR 4495 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4421#issuecomment-73625791
Types need to exist, but names don't. They can just be random column names
like _1, _2, _3.
In Scala, if you import sqlContext.implicits._, then any RDD[Product]
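A plain-Scala illustration of rxin's point (no Spark required): tuples are `Product`s, and the positional fallbacks `_1`, `_2`, `_3` mirror the tuple's own accessors, which is why a schema can be inferred from the types alone with no column names supplied.

```scala
// The default column names line up with the tuple's accessor positions.
val row: Product = (1, "alice", 3.14)
val defaultNames = (1 to row.productArity).map(i => s"_$i")
println(defaultNames.mkString(", ")) // _1, _2, _3
```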
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/4367#issuecomment-73625959
yes ,i will close this PR. thanks all.
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4168#issuecomment-73628826
@andrewor14 when I mentioned rederiving `numPendingExecutors` I was
actually talking about the way we calculate the version of it that lives in
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4488#issuecomment-73625896
[Test build #27147 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27147/consoleFull)
for PR 4488 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-73637900
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/4427#issuecomment-73637908
@yhuai
Didn't notice this before, thanks.
I think we'd better not change this, but only update `input46.q`. Is that ok?
@watermen
Maybe
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73638010
[Test build #27165 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27165/consoleFull)
for PR 4415 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4257#issuecomment-73639219
[Test build #27166 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27166/consoleFull)
for PR 4257 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4494#issuecomment-73646705
https://github.com/apache/spark/pull/4227/files#diff-7253a38df7e111ecf6b1ef71feba383bL339
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4427#issuecomment-73646747
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4446#discussion_r24390259
--- Diff: python/pyspark/sql.py ---
@@ -1889,9 +1931,57 @@ def insertInto(self, tableName, overwrite=False):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4446#issuecomment-73648993
[Test build #27175 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27175/consoleFull)
for PR 4446 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4446#issuecomment-73648996
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/4168#discussion_r24391913
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -224,59 +240,90 @@ private[spark] class ExecutorAllocationManager(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73653296
[Test build #27178 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27178/consoleFull)
for PR 4384 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73653301
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392639
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392638
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392647
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392631
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392624
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392641
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4496#issuecomment-73653660
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4496#issuecomment-73653657
[Test build #27181 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27181/consoleFull)
for PR 4496 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392627
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392636
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392622
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392633
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392644
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4495#discussion_r24392634
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/PowerIterationClusteringExample.scala
---
@@ -0,0 +1,176 @@
+/*
+ * Licensed to the
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4490#issuecomment-73627857
@JoshRosen
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/4490
[SPARK-5703] AllJobsPage throws empty.max exception
If you have a `SparkListenerJobEnd` event without the corresponding
`SparkListenerJobStart` event, then `JobProgressListener` will create an
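SPARK-5703 in miniature: calling `.max` on an empty collection throws the "empty.max" `UnsupportedOperationException`, and a `JobEnd` with no matching `JobStart` leaves the listener with an empty stage list. A hedged sketch of the safe form (the helper name is illustrative, not Spark's):

```scala
// Return an Option instead of letting .max throw on empty input.
def lastStageId(stageIds: Seq[Int]): Option[Int] =
  if (stageIds.isEmpty) None else Some(stageIds.max)

val threw =
  try { Seq.empty[Int].max; false }
  catch { case _: UnsupportedOperationException => true }

println(threw)                     // true: bare .max fails on empty input
println(lastStageId(Seq(1, 4, 2))) // Some(4)
println(lastStageId(Nil))          // None
```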