Github user BryanCutler commented on the pull request:
https://github.com/apache/spark/pull/10602#issuecomment-181506578
Hi @somideshmukh , any update for this? Let me know if you need any
assistance.
---
If your project is set up for it, you can reply to this email and have your
Github user mariobriggs commented on a diff in the pull request:
https://github.com/apache/spark/pull/10953#discussion_r52203773
--- Diff:
external/kafka/src/main/scala/org/apache/spark/streaming/kafka/KafkaRDDBase.scala
---
@@ -0,0 +1,105 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/7#issuecomment-181509313
**[Test build #50927 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50927/consoleFull)**
for PR 7 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11106#issuecomment-181512242
**[Test build #50930 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50930/consoleFull)**
for PR 11106 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/5#issuecomment-181514492
/cc @andrewor14, we should look at this PR since I know that we discussed
making the metric-style accumulators a public API while working on that patch.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/10948#issuecomment-181518205
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/10780#issuecomment-181485011
LGTM.
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/10780#discussion_r52197831
--- Diff: network/yarn/pom.xml ---
@@ -86,6 +88,15 @@
+
+
+
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11051#issuecomment-181497827
**[Test build #2523 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2523/consoleFull)**
for PR 11051 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181500981
**[Test build #2524 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2524/consoleFull)**
for PR 8 at commit
Github user mariobriggs commented on a diff in the pull request:
https://github.com/apache/spark/pull/10953#discussion_r52202954
--- Diff:
external/kafka/src/main/scala/org/apache/spark/streaming/kafka/NewKafkaCluster.scala
---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/10948#issuecomment-181513874
**[Test build #50931 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50931/consoleFull)**
for PR 10948 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9#issuecomment-181571347
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/9#issuecomment-181571159
**[Test build #50935 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50935/consoleFull)**
for PR 9 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/9#issuecomment-181571346
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11121#issuecomment-181578382
**[Test build #50940 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50940/consoleFull)**
for PR 11121 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11121#issuecomment-181578978
**[Test build #50940 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50940/consoleFull)**
for PR 11121 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11121#issuecomment-181578995
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/11109#issuecomment-181581831
Now that https://github.com/apache/spark/pull/11025 has been merged,
jenkins retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/9#issuecomment-181556332
**[Test build #50935 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50935/consoleFull)**
for PR 9 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11034#issuecomment-181565416
**[Test build #50937 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50937/consoleFull)**
for PR 11034 at commit
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52224224
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -185,6 +185,17 @@ final class DataFrameWriter private[sql](df:
GitHub user gatorsmile opened a pull request:
https://github.com/apache/spark/pull/11120
[SPARK-13235] [SQL] Removed an Extra Distinct from the Plan with Union
Distinct
Currently, the parser added two `Distinct` operators in the plan if we are
using `Union Distinct`. This PR is to
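The redundancy described in that PR summary can be sketched outside Spark with a hypothetical `unionDistinct` helper (plain JDK collections, not the parser's actual code): one dedup pass over the concatenated inputs already yields the UNION DISTINCT result, so a second `Distinct` operator on top adds nothing.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class UnionDistinctSketch {
    // One dedup pass over the concatenation is enough; deduplicating
    // again afterwards cannot remove any further rows, so a second
    // Distinct operator is pure overhead.
    static <T> Set<T> unionDistinct(List<T> left, List<T> right) {
        Set<T> out = new LinkedHashSet<>(left); // dedups as it inserts
        out.addAll(right);
        return out;
    }

    public static void main(String[] args) {
        Set<Integer> once = unionDistinct(List.of(1, 2, 2), List.of(2, 3));
        // A second "distinct" over an already-deduplicated set is a no-op.
        Set<Integer> twice = new LinkedHashSet<>(once);
        System.out.println(once);   // [1, 2, 3]
        System.out.println(twice);  // [1, 2, 3]
    }
}
```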
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11120#issuecomment-181572532
**[Test build #50938 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50938/consoleFull)**
for PR 11120 at commit
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10958#discussion_r52227516
--- Diff: core/src/main/scala/org/apache/spark/Accumulator.scala ---
@@ -60,19 +60,20 @@ import org.apache.spark.storage.{BlockId, BlockStatus}
*
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10958#discussion_r52227700
--- Diff: core/src/main/scala/org/apache/spark/TaskEndReason.scala ---
@@ -118,7 +118,7 @@ case class ExceptionFailure(
description: String,
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10958#discussion_r52227733
--- Diff: core/src/main/scala/org/apache/spark/TaskEndReason.scala ---
@@ -118,7 +118,7 @@ case class ExceptionFailure(
description: String,
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/9#issuecomment-181573228
re the first question - I don't think this necessarily needs to be a code
generated param (although if we do end up having more shared params with
templated types we
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10958#discussion_r52228150
--- Diff:
core/src/test/scala/org/apache/spark/InternalAccumulatorSuite.scala ---
@@ -220,7 +220,7 @@ class InternalAccumulatorSuite extends SparkFunSuite
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52230832
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52230982
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/11057#issuecomment-181581662
Ok, makes sense. I guess the CSS must be handling setting the width then?
I'll try to take a closer look tomorrow, the documentation for the datatables
on the
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52211302
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/StreamTest.scala ---
@@ -343,4 +431,58 @@ trait StreamTest extends QueryTest with Timeouts {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11083#issuecomment-181529650
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11083#issuecomment-181529648
Merged build finished. Test PASSed.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11083#issuecomment-181529198
**[Test build #50928 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50928/consoleFull)**
for PR 11083 at commit
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52211976
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -198,14 +257,46 @@ class StreamExecution(
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11034#issuecomment-181529265
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52212720
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ContinuousQueryException.scala ---
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3#issuecomment-181532871
**[Test build #50934 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50934/consoleFull)**
for PR 3 at commit
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214076
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181537586
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/1#issuecomment-181537630
This looks good to me, and it looks like we synchronize on the same object
that the deprecated SynchronizedQueue did, so it should still be ok for people's
code that was
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181537583
Merged build finished. Test FAILed.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11106#issuecomment-181540969
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/11095#discussion_r52216780
--- Diff:
core/src/test/java/org/apache/spark/shuffle/sort/ShuffleInMemorySorterSuite.java
---
@@ -75,6 +75,9 @@ public void testBasicSorting() throws
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/11104#issuecomment-181541571
So I think the `putIfAbsent` API might do what you're looking for there.
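For reference, the `putIfAbsent` being suggested is part of Java's standard `ConcurrentMap` API, and `computeIfAbsent` is the closest JDK analogue of Scala's `getOrElseUpdate`. A minimal sketch of the difference (plain JDK, not the PR's code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentSketch {
    public static void main(String[] args) {
        Map<String, Integer> cache = new ConcurrentHashMap<>();

        // putIfAbsent inserts only when the key is missing and returns
        // the previous value (null if there was none).
        Integer prev = cache.putIfAbsent("a", 1);
        System.out.println(prev);                        // null
        System.out.println(cache.putIfAbsent("a", 99));  // 1 (not replaced)

        // computeIfAbsent is the closer analogue of Scala's
        // getOrElseUpdate: it computes the value lazily and returns
        // the (possibly newly inserted) value.
        int v = cache.computeIfAbsent("b", k -> 2);
        System.out.println(v); // 2
    }
}
```

The practical difference: `putIfAbsent` always evaluates its value argument, while `computeIfAbsent` only runs the mapping function when the key is absent.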
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52219962
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user steveloughran commented on a diff in the pull request:
https://github.com/apache/spark/pull/8#discussion_r52220649
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/ApplicationCache.scala ---
@@ -0,0 +1,669 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/11034#issuecomment-181562746
retest this please
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/11055#issuecomment-181565008
I tried this patch with ss_max query, it failed with:
```
java.lang.NullPointerException
at
Github user sethah commented on a diff in the pull request:
https://github.com/apache/spark/pull/11099#discussion_r52225995
--- Diff: python/pyspark/ml/param/__init__.py ---
@@ -49,11 +53,21 @@ def _copy_new_parent(self, parent):
else:
raise
Github user ajbozarth commented on the pull request:
https://github.com/apache/spark/pull/11057#issuecomment-181569994
Better clarification:
Currently the column widths are hard-coded for the content of the whole
table, regardless of the pagination.
Now the column widths fit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11099#issuecomment-181573760
**[Test build #50939 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50939/consoleFull)**
for PR 11099 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52228038
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user huaxingao commented on a diff in the pull request:
https://github.com/apache/spark/pull/11104#discussion_r52228031
--- Diff:
external/kafka/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala
---
@@ -65,12 +67,14 @@ class KafkaStreamSuite extends
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/9#discussion_r52231326
--- Diff: mllib/src/main/scala/org/apache/spark/ml/clustering/KMeans.scala
---
@@ -237,6 +237,27 @@ class KMeans @Since("1.5.0") (
@Since("1.5.0")
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181562007
**[Test build #50936 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50936/consoleFull)**
for PR 8 at commit
Github user ajbozarth commented on the pull request:
https://github.com/apache/spark/pull/11057#issuecomment-181568845
A long name will still make the table look odd (column widths) as seen in
the second picture, but now it only affects the page that the app with the long
name is on.
GitHub user JoshRosen opened a pull request:
https://github.com/apache/spark/pull/11121
[SPARK-12503][SPARK-12505] Limit pushdown in UNION ALL and OUTER JOIN
This patch adds a new optimizer rule for performing limit pushdown. Limits
will now be pushed down in two cases:
-
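The intuition behind pushing a limit below UNION ALL can be sketched with plain Java streams (a hypothetical `limitOverUnionAll` helper, not the actual Catalyst rule): each child can safely be limited to n rows first, because no more than n rows from either input can survive into the final n-row output; the outer limit is kept for correctness.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class LimitPushdownSketch {
    // LIMIT n over (A UNION ALL B): limiting each side to n first
    // cannot change the result, since at most n rows from either
    // input can appear in the final n-row output.
    static List<Integer> limitOverUnionAll(List<Integer> a, List<Integer> b, int n) {
        return Stream.concat(
                a.stream().limit(n),   // pushed-down limit on the left child
                b.stream().limit(n))   // pushed-down limit on the right child
            .limit(n)                  // outer limit retained for correctness
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(limitOverUnionAll(List.of(1, 2, 3, 4), List.of(5, 6), 3));
        // [1, 2, 3]
    }
}
```

The win is that each child scans and ships at most n rows instead of its full output; in SQL, a LIMIT without ORDER BY makes no ordering promise anyway, so any n surviving rows are acceptable.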
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10958#discussion_r52228242
--- Diff: project/MimaExcludes.scala ---
@@ -183,7 +183,8 @@ object MimaExcludes {
) ++ Seq(
// SPARK-12896 Send only accumulator
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/10958#issuecomment-181574664
LGTM. Sorry for the delay in reviewing.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11121#issuecomment-181578991
Merged build finished. Test FAILed.
Github user hvanhovell commented on the pull request:
https://github.com/apache/spark/pull/11083#issuecomment-181493342
retest this please
Github user thunterdb commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52202089
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ContinuousQuery.scala ---
@@ -17,11 +17,47 @@
package org.apache.spark.sql
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181499494
Reviewers: note this was done primarily by @steveloughran , for now just
posting this as a potential simplification to consider vs.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/11083#issuecomment-181499226
**[Test build #50928 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50928/consoleFull)**
for PR 11083 at commit
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/6935#discussion_r52202867
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -415,8 +488,59 @@ private[history] class FsHistoryProvider(conf:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181508356
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181508283
**[Test build #50929 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/50929/consoleFull)**
for PR 8 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181508354
Merged build finished. Test FAILed.
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/7334#discussion_r52209170
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user ajbozarth commented on the pull request:
https://github.com/apache/spark/pull/11029#issuecomment-181521912
@tgravescs also a fix on the DataTables addition if you want to take a look
Github user ajbozarth commented on the pull request:
https://github.com/apache/spark/pull/11038#issuecomment-181521289
@tgravescs since you did a lot of review on the DataTables addition do you
want to take a look at this?
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/11034#discussion_r52209936
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
---
@@ -0,0 +1,319 @@
+/*
+ * Licensed to the Apache
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52212645
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/ContinuousQuery.scala ---
@@ -17,11 +17,47 @@
package org.apache.spark.sql
+import
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/3#issuecomment-181530951
ok to test
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/7334#discussion_r52213727
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala ---
@@ -0,0 +1,122 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214655
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -39,7 +39,7 @@ case class
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/11057#issuecomment-181536564
maybe I missed it, what happens to the long application name with this
change? Does it get truncated or wrapped?
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/1#issuecomment-181538552
Agreed, I also just compared it with the SynchronizedQueue sources and
behaviour should be identical. Looks good
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/11105#issuecomment-181541435
cc @andrewor14 @squito @JoshRosen
Github user nongli commented on the pull request:
https://github.com/apache/spark/pull/10965#issuecomment-181520467
The benchmark LGTM and I think this is useful.
@maropu Before you make significant changes to this, can you write up what
you plan to do?
Github user ajbozarth commented on the pull request:
https://github.com/apache/spark/pull/11057#issuecomment-181522448
@tgravescs a third fix on the DataTables addition, just to swamp you with
small requests (sorry)
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52211402
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -55,9 +59,89 @@ class StreamExecution(
private
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/11034#issuecomment-181529260
Merged build finished. Test FAILed.
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52212788
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/DataFrameWriter.scala ---
@@ -185,6 +185,17 @@ final class DataFrameWriter private[sql](df:
DataFrame)
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/7334#issuecomment-181533917
LGTM, merging this into master.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/8#issuecomment-181535668
**[Test build #2525 has
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/2525/consoleFull)**
for PR 8 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/7334
Github user jodersky commented on a diff in the pull request:
https://github.com/apache/spark/pull/3#discussion_r52214846
--- Diff:
examples/src/main/scala/org/apache/spark/examples/streaming/ActorWordCount.scala
---
@@ -63,11 +63,11 @@ class FeederActor extends Actor {
Github user holdenk commented on a diff in the pull request:
https://github.com/apache/spark/pull/11104#discussion_r52215569
--- Diff:
external/kafka/src/test/scala/org/apache/spark/streaming/kafka/KafkaStreamSuite.scala
---
@@ -65,12 +67,14 @@ class KafkaStreamSuite extends
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/11095
GitHub user yinxusen opened a pull request:
https://github.com/apache/spark/pull/9
[SPARK-10780][ML][WIP] Add initial model to kmeans
https://issues.apache.org/jira/browse/SPARK-10780
I mark it as WIP because there are several issues that need discussion:
1. Codegen
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/11025#issuecomment-181523197
Merged into master. Thanks!
Github user jodersky commented on the pull request:
https://github.com/apache/spark/pull/10984#issuecomment-181524146
Looks good to me.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/11025
Github user huaxingao commented on the pull request:
https://github.com/apache/spark/pull/11104#issuecomment-181528520
@holdenk
Could you please review one more time?
I changed to the Java API except for the getOrElseUpdate in KafkaStreamSuite.scala.
I can't find a Java equivalent
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/11030#discussion_r52211805
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/StreamExecution.scala
---
@@ -150,36 +205,40 @@ class StreamExecution(
Github user holdenk commented on the pull request:
https://github.com/apache/spark/pull/11104#issuecomment-181537776
Sure I'll take another look.