Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r118804254
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -719,3 +745,23 @@ object ThrowingInterruptedIOException
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r119014511
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -47,50 +44,54 @@ trait StateStore
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r118615627
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -61,11 +60,24 @@ trait StateStoreReader extends
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r119014924
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -165,54 +189,88 @@ case class StateStoreSaveExec
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r119014948
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/statefulOperators.scala
---
@@ -165,54 +189,88 @@ case class StateStoreSaveExec
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r119013976
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -102,28 +103,100 @@ trait StateStore
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18107#discussion_r119014486
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/StateStore.scala
---
@@ -47,50 +44,54 @@ trait StateStore
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17308
LGTM. Merging to master and 2.2. Thanks!
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18135
LGTM. Merging to master and 2.2. Thanks!
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18126
Thanks! Merging to master and 2.2.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18126#discussion_r118805346
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala ---
@@ -57,7 +57,8 @@ private[deploy] class DriverRunner(
@volatile
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18126
> 10s is pretty short for a driver timeout
This is usually not a problem. If the worker is trying to kill a driver, it
often means the driver is unhealthy or being killed by the u
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118799350
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -0,0 +1,101 @@
+/*
+ * Licensed
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118798407
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaWriteTask.scala
---
@@ -68,11 +67,10 @@ private[kafka010] class
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118799879
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -0,0 +1,101 @@
+/*
+ * Licensed
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18126
This is the behavior in 2.1.0; if we change the default value to
`Long.MaxValue`, it would surprise users again :(.
I'm inclined to keep it the same as in 2.1.0.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18126
cc @vanzin @BryanCutler
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/18126
[SPARK-20843][Core]Add a config to set driver terminate timeout
## What changes were proposed in this pull request?
Add a worker configuration to set how long to wait before force killing
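The pattern this PR describes — give the driver a bounded grace period to exit cleanly, then force-kill it — can be sketched with the standard `java.lang.Process` API. This is a minimal illustration, not Spark's actual `DriverRunner` code; the class and method names here are hypothetical, and the timeout value stands in for the new worker config.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of a "terminate with timeout" helper: ask the
// driver process to exit, wait up to terminateTimeoutMs, then destroy
// it forcibly if it is still alive. Names are illustrative only.
public class DriverTerminator {
    static int terminate(Process driver, long terminateTimeoutMs)
            throws InterruptedException {
        driver.destroy(); // polite request first (SIGTERM on POSIX)
        if (driver.waitFor(terminateTimeoutMs, TimeUnit.MILLISECONDS)) {
            return driver.exitValue(); // exited within the grace period
        }
        driver.destroyForcibly(); // grace period elapsed: SIGKILL
        driver.waitFor();         // forced kill always completes
        return driver.exitValue();
    }

    public static void main(String[] args) throws Exception {
        // Stand-in "driver": a sleeping process, given a 200 ms grace period.
        Process p = new ProcessBuilder("sleep", "30").start();
        terminate(p, 200);
        System.out.println("driver terminated: " + !p.isAlive());
    }
}
```

The point of making the grace period configurable, as the PR proposes, is that a fixed short timeout may not suit drivers that need time to clean up, while a very long one delays worker shutdown.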
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17343
LGTM. Thanks! Merging to master.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18065
LGTM. Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18119
@kiszk I just pushed #17087 directly to branch-2.2 since there is no
conflict. Could you close this one?
In addition, I noticed that the test outputs too many logs; could you
submit a PR
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/11746#discussion_r118757391
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala ---
@@ -53,9 +53,11 @@ private[deploy] class DriverRunner(
@volatile
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17343
LGTM pending tests.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17343#discussion_r118755984
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -339,23 +355,26 @@ void forceSorterToSpill() throws IOException
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17343#discussion_r118644650
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -339,23 +355,26 @@ void forceSorterToSpill() throws IOException
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17343#discussion_r118644725
--- Diff:
core/src/main/java/org/apache/spark/shuffle/sort/UnsafeShuffleWriter.java ---
@@ -339,23 +356,27 @@ void forceSorterToSpill() throws IOException
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18064#discussion_r118607514
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -965,14 +965,20 @@ class SQLQuerySuite extends QueryTest
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18065#discussion_r118555796
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamReader.scala
---
@@ -35,7 +35,6 @@ import org.apache.spark.sql.types.StructType
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/11746#discussion_r118544204
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala ---
@@ -53,9 +53,11 @@ private[deploy] class DriverRunner(
@volatile
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18101
Thanks. Merging to master, 2.2 and 2.1.
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/18101
[SPARK-20874][Examples]Add Structured Streaming Kafka Source to examples
project
## What changes were proposed in this pull request?
Add Structured Streaming Kafka Source to the `examples
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18064#discussion_r118391216
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/FileFormatWriter.scala
---
@@ -161,50 +161,50 @@ object FileFormatWriter
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18064#discussion_r118391357
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala
---
@@ -965,14 +965,20 @@ class SQLQuerySuite extends QueryTest
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118388741
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -0,0 +1,174 @@
+/*
+ * Licensed
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18073#discussion_r118384461
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -495,6 +496,8 @@ object
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18073#discussion_r118356467
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -495,6 +496,8 @@ object
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/18073#discussion_r118356094
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFileFormat.scala
---
@@ -495,6 +496,8 @@ object
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17308#discussion_r118338475
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/CachedKafkaProducer.scala
---
@@ -0,0 +1,174 @@
+/*
+ * Licensed
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18024
LGTM. Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17763
Thanks! Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17763
@yhuai could you take a look at this one since you reviewed the previous
PR, please?
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18021
Thanks! Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/18021
cc @JoshRosen
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/18021
[SPARK-20788][Core]Fix the Executor task reaper's false alarm warning logs
## What changes were proposed in this pull request?
Executor task reaper may fail to detect if a task is finished
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17087
LGTM. Thanks! Merging to master.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17763
retest this please
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17821
Thanks! Merging to master and 2.2.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r116615854
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -353,9 +356,28 @@ abstract class SparkPlan extends QueryPlan
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r116612707
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -353,9 +356,28 @@ abstract class SparkPlan extends QueryPlan
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r116611382
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -899,8 +902,16 @@ object CodeGenerator
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17957
LGTM. Merging to master and 2.2. Thanks!
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17958
LGTM. Merging to master and 2.2. Thanks!
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17958#discussion_r116292013
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -202,13 +203,22 @@ private
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17958#discussion_r116289648
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -202,13 +203,22 @@ private
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17958#discussion_r116288183
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/state/HDFSBackedStateStoreProvider.scala
---
@@ -202,13 +203,22 @@ private
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17954
LGTM. Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17942
Thanks! Merging to master and 2.2.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116143769
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116063045
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r116062195
--- Diff: core/src/main/scala/org/apache/spark/util/taskListeners.scala ---
@@ -55,14 +55,16 @@ class TaskCompletionListenerException(
extends
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17917
LGTM. Merging to master and 2.2. Thanks!
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r115884649
--- Diff: core/src/main/scala/org/apache/spark/util/taskListeners.scala ---
@@ -55,14 +55,16 @@ class TaskCompletionListenerException(
extends
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r115884037
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17942#discussion_r115883997
--- Diff: core/src/main/scala/org/apache/spark/scheduler/Task.scala ---
@@ -115,26 +115,33 @@ private[spark] abstract class Task[T](
case t
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/17942
[SPARK-20702][Core]TaskContextImpl.markTaskCompleted should not hide the
original error
## What changes were proposed in this pull request?
This PR adds an `error` parameter
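The idea behind not hiding the original error — keep the task's own failure as the primary exception and attach any completion-listener failures to it via `Throwable.addSuppressed` — can be sketched as below. This is a simplified illustration, not `TaskContextImpl`'s real code; `runListeners` and `TaskListener` are invented names for the sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: run completion listeners after a task finishes.
// If the task itself failed, its error must win; listener failures are
// recorded as suppressed exceptions instead of replacing the root cause.
public class ListenerErrors {
    interface TaskListener { void onTaskCompletion() throws Exception; }

    // error is the task's own failure, or null if the task succeeded.
    static void runListeners(List<TaskListener> listeners, Throwable error)
            throws Throwable {
        List<Throwable> listenerErrors = new ArrayList<>();
        for (TaskListener l : listeners) {
            try {
                l.onTaskCompletion();
            } catch (Exception e) {
                listenerErrors.add(e); // collect, don't throw yet
            }
        }
        if (error != null) {
            // Original task error stays primary; listener failures ride along.
            for (Throwable t : listenerErrors) error.addSuppressed(t);
            throw error;
        }
        if (!listenerErrors.isEmpty()) {
            // Task succeeded but listeners failed: first failure is primary.
            Throwable first = listenerErrors.get(0);
            for (int i = 1; i < listenerErrors.size(); i++) {
                first.addSuppressed(listenerErrors.get(i));
            }
            throw first;
        }
    }

    public static void main(String[] args) {
        RuntimeException taskError = new RuntimeException("root cause");
        List<TaskListener> listeners = new ArrayList<>();
        listeners.add(() -> { throw new IllegalStateException("listener blew up"); });
        try {
            runListeners(listeners, taskError);
        } catch (Throwable t) {
            System.out.println("thrown=" + t.getMessage()
                + " suppressed=" + t.getSuppressed().length);
        }
    }
}
```

Without this, a listener exception thrown during cleanup would mask the real reason the task failed, which is exactly the bug the PR title describes.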
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17913
@uncleGen Thanks for doing this. Since adding an intermediate node into the
logical plan may make some optimizations fail, we need to find a better
solution. We probably will make a big
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17917#discussion_r115812262
--- Diff:
external/kafka-0-10-sql/src/main/scala/org/apache/spark/sql/kafka010/KafkaRelation.scala
---
@@ -143,4 +143,6 @@ private[kafka010] class
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115809957
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -951,10 +966,14 @@ object CodeGenerator
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115808650
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -353,9 +356,28 @@ abstract class SparkPlan extends QueryPlan
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115806516
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -353,9 +356,28 @@ abstract class SparkPlan extends QueryPlan
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115806099
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
---
@@ -899,8 +902,20 @@ object CodeGenerator
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115805782
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -20,20 +20,22 @@ package
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17087#discussion_r115805835
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/predicates.scala
---
@@ -20,20 +20,22 @@ package
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17896
@uncleGen Thanks! LGTM. Merging to master and 2.2.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17896
retest this please
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17896#discussion_r115418094
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2457,6 +2457,19 @@ object CleanupAliases extends Rule
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17896#discussion_r115349969
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2457,6 +2457,19 @@ object CleanupAliases extends Rule
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17896#discussion_r115348902
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala
---
@@ -2457,6 +2457,19 @@ object CleanupAliases extends Rule
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115098191
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/SingletonReplSuite.scala
---
@@ -0,0 +1,404 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115097789
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/ReplSuite.scala ---
@@ -373,52 +190,6 @@ class ReplSuite extends SparkFunSuite
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115095994
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/SingletonReplSuite.scala
---
@@ -0,0 +1,404 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115096431
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/SingletonReplSuite.scala
---
@@ -0,0 +1,404 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115098412
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/SingletonReplSuite.scala
---
@@ -0,0 +1,404 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17844#discussion_r115060517
--- Diff:
repl/scala-2.11/src/test/scala/org/apache/spark/repl/SingletonReplSuite.scala
---
@@ -0,0 +1,404 @@
+/*
+ * Licensed to the Apache
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
I suggest that you just fix them in this PR. If it has to be a large PR,
I'm okay with that.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
I don't think we need to rush. As far as I can tell, this PR breaks two
things:
- SQL metrics on the Web UI are broken
- It doesn't display the batch queries inside a Structured Streaming query
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17863
Thanks! Merging to master, 2.2 and 2.1.
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17863
@brkyvz Could you take a look? Thanks!
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17863
This does help. Now this test takes about 1 second.
http://spark-tests.appspot.com/test-details?suite_name=org.apache.spark.sql.kafka010.KafkaSourceSuite_name=deserialization+of+initial
GitHub user zsxwing opened a pull request:
https://github.com/apache/spark/pull/17863
[SPARK-20603][SS][Test]Set default number of topic partitions to 1 to
reduce the load
## What changes were proposed in this pull request?
I checked the logs of
https
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
I was not saying there is no way to fix metrics. I was just asking for your
thoughts. If we don't have a concrete plan, just merging this PR might leave
a long-term regression.
I just want
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
> @zsxwing, I don't know. Sounds like we should fix the underlying problem
that there are 2 physical plans.
SQL metrics won't work without fixing it. IMO, that's more serious t
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
> That requires breaking the command into two phases, one to get a
SparkPlan and one to run it.
Yeah, but how do we show metrics you get from one plan on another plan's DAG
consider
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17540
@rdblue I just tested this PR and found that I could not see any SQL
metrics on the Web UI. This is pretty important for many users to analyze
their queries.
What's your plan to fix it? As far
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17346
LGTM. Thanks! Merging to master and 2.2.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17821#discussion_r114446352
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -266,7 +289,8 @@ private[deploy] class Worker
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17735
Looks pretty good except one minor issue in tests.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17735#discussion_r114425160
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala ---
@@ -120,6 +141,32 @@ class StreamSuite extends StreamTest
Github user zsxwing commented on the issue:
https://github.com/apache/spark/pull/17833
On second thought, for https://issues.apache.org/jira/browse/SPARK-20548
, we can just combine some tests into one test to reduce the number of
SparkContexts and REPLs.
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/17821#discussion_r114419583
--- Diff: core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
---
@@ -266,7 +282,7 @@ private[deploy] class Worker