Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20183#discussion_r161149468
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -206,36 +206,50 @@ private[spark] class TorrentBroadcast[T
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20183#discussion_r161147892
--- Diff:
core/src/main/scala/org/apache/spark/broadcast/TorrentBroadcast.scala ---
@@ -206,36 +206,50 @@ private[spark] class TorrentBroadcast[T
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20242
LGTM. @dongjoon-hyun, do the current changes include all the lint issues, or do you still have further changes?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20184
@liutang123, can you please tell us how to reproduce your issue easily?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20236
@squito thanks for the fix. I also don't have PRs to verify the changes, but I think catching the exception should be e…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
Let me merge to master and branch-2.3. Thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
LGTM. @merlintang please fix the PR title, thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
@steveloughran @vanzin, please help review this again.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19885#discussion_r160617532
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
---
@@ -357,6 +357,41 @@ class ClientSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19885#discussion_r160617569
--- Diff:
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/ClientSuite.scala
---
@@ -357,6 +357,41 @@ class ClientSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20179#discussion_r160612163
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,11 +196,24 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20179#discussion_r160351383
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,11 +196,24 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20179#discussion_r160347716
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,11 +196,24 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20179#discussion_r160347387
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,11 +196,24 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Yes, I think so. Based on the current MetricsSystem, it is hard to avoid `MetricRegistry`, whether explicitly or implicitly (unless we refactor/abstract this part a lot). Also true if the user wants…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Hi @CodingCat, thanks a lot for your explanation. IIUC, from the code you mentioned above, we still need to pass `MetricRegistry` to `Reporter`; otherwise, how would a reporter report the…
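For context, a minimal sketch of the dependency in question using the stock Dropwizard/Codahale API (the metric name below is illustrative, not one of Spark's):

```scala
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{ConsoleReporter, MetricRegistry}

// A reporter is built against a MetricRegistry, so the registry has to be
// handed to it one way or another.
val registry = new MetricRegistry()
registry.counter("example.requests").inc() // illustrative metric

val reporter = ConsoleReporter.forRegistry(registry)
  .convertRatesTo(TimeUnit.SECONDS)
  .convertDurationsTo(TimeUnit.MILLISECONDS)
  .build()
reporter.start(10, TimeUnit.SECONDS) // report every 10 seconds
```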
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
Originally in Spark dynamic allocation, "spark.executor.instances" and the dynamic allocation conf cannot coexist: if "spark.executor.instances" is set, dynamic allocation…
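As a sketch of the conflict being described (both conf keys are real; the comment in the last line states the historical behavior described above):

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true") // ask for dynamic allocation
  .set("spark.shuffle.service.enabled", "true")   // required by dynamic allocation
  .set("spark.executor.instances", "4")           // static count; historically this
                                                  // disabled dynamic allocation
```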
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20144
@zsxwing, would you please take a look? Thanks!
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/20144
[SPARK-21475][CORE][2nd attempt] Change to use NIO's Files API for external
shuffle service
## What changes were proposed in this pull request?
This PR is the second attempt of #…
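A rough illustration of the kind of change this describes, replacing `java.io.File`-based stream creation with NIO's Files API (the path below is hypothetical):

```scala
import java.nio.file.{Files, Paths}

// NIO's Files API throws descriptive IOExceptions on failure instead of the
// less informative errors from the old java.io constructors.
val indexPath = Paths.get("/tmp/shuffle_0_0_0.index") // hypothetical file
val in = Files.newInputStream(indexPath)
try {
  val header = new Array[Byte](8)
  in.read(header) // read part of the shuffle index
} finally {
  in.close()
}
```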
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20119
OK, I will do it.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
I'm not against the fix. My concern is that we've shifted to structured streaming; also, this feature (streaming dynamic allocation) is seldom used/tested, so this might not be the…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20119
@zsxwing, maybe we only need to fix the above two points related to the external shuffle service; what do you think?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20078
Sorry to chime in. This feature (streaming dynamic allocation) is obsolete and has bugs, and users seldom enable it; is it still worth fixing…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
@CodingCat, IIUC the way you mentioned will also expose the Codahale `Reporter` to the user; can you please explain more? Thanks
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20119#discussion_r159372233
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ShuffleIndexInformation.java
---
@@ -39,7 +39,7 @@ public
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20119#discussion_r159371080
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -198,7 +196,7 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20119#discussion_r159370876
--- Diff:
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/OneForOneBlockFetcher.java
---
@@ -165,7 +165,7 @@ private void
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/20119#discussion_r159364034
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -94,9 +93,9 @@ public ByteBuffer
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Sorry for the late response, I was off the last two weeks. Currently I don't have a better solution for this; @CodingCat, let me think about your suggestion, thanks
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/20119
Sorry, I haven't checked the details; let me take a look. The changes I made were trying to fix a memory issue for shuffle (especially the external shuffle service); this issue occurred i…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
I see, thanks for the explanation @steveloughran. My concern is that the current changes will affect all filesystems, but we only saw this issue in wasb. So limiting the authority comparison to only…
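A hypothetical helper sketching that idea, gating the userInfo check on the wasb scheme only (this is not the PR's actual code):

```scala
import java.net.URI

def sameFileSystem(a: URI, b: URI): Boolean = {
  val baseMatch = a.getScheme == b.getScheme &&
    a.getHost == b.getHost &&
    a.getPort == b.getPort
  // Assumption for illustration: only wasb treats userInfo as significant.
  val userInfoMatters = a.getScheme == "wasb"
  baseMatch && (!userInfoMatters || a.getUserInfo == b.getUserInfo)
}
```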
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
> User info isn't picked up from the URL, it's taken off your Kerberos credentials. If you are running HDFS unkerberized, then UGI takes it from the environment variable HAD…
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r155125699
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/Client.scala
---
@@ -0,0 +1,234
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
I still have a question about it. For HDFS URIs like `hdfs://us...@nn1.com:8020` and `hdfs://us...@nn1.com:8020`, do we honor userInfo for HDFS filesystems? Are they two HDFS clusters, or just…
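For reference, `java.net.URI` does expose the userInfo component; whether Hadoop's `FileSystem` treats it as part of the filesystem identity is exactly the open question (the user names below are placeholders for the obfuscated ones above):

```scala
import java.net.URI

val a = new URI("hdfs://alice@nn1.com:8020") // placeholder user names
val b = new URI("hdfs://bob@nn1.com:8020")
println(a.getUserInfo)          // alice
println(b.getUserInfo)          // bob
println(a.getHost == b.getHost) // true: same namenode host either way
```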
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154875878
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/DriverConfigurationStepsOrchestrator.scala
---
@@ -0,0
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154870951
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2744,6 +2744,25 @@ private[spark] object Utils extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154874378
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/Client.scala
---
@@ -0,0 +1,234
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154872371
--- Diff: docs/running-on-yarn.md ---
@@ -234,18 +234,11 @@ To use a custom metrics.properties for the
application master and executors, upd
The
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r154871648
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -2744,6 +2744,25 @@ private[spark] object Utils extends Logging
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
ok to test.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
Is this assumption based on the implementation of Hadoop `FileSystem`? I was thinking that wasb is an exception; for the others we still keep the original code.
@steveloughran, would you…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
@vanzin, please help review this, thanks!
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19885#discussion_r154822603
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -1428,6 +1428,12 @@ private object Client extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19885
@merlintang, would you please add the problem to your PR description? Currently it is a WASB problem in which userInfo is honored to differentiate filesystems. Please add the scenario to the…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19840
I'm a little concerned about such changes; this may be misconfigured and introduce a discrepancy between the driver Python and the executor Python. At least we should honor this configur…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19840
Oh, I see. You're running in client mode, so this one, `--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=py3.zip/py3/bin/python`, is useless. So I guess the behavior is expected. Be…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19840
I think on YARN we have several different ways to set `PYSPARK_PYTHON` (see the sketch below); I guess your issue is which one should take priority?
Can you please:
1. Define a consistent ordering…
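A sketch of the different sources involved; the conf keys are real, and which one wins is exactly the ordering question being raised:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.pyspark.python", "/usr/bin/python3")                   // generic conf
  .set("spark.yarn.appMasterEnv.PYSPARK_PYTHON", "/usr/bin/python3") // AM env (cluster mode)
  .set("spark.executorEnv.PYSPARK_PYTHON", "/usr/bin/python3")       // executor env
// Plus the PYSPARK_PYTHON / PYSPARK_DRIVER_PYTHON environment variables set
// on the submitting machine.
```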
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19856
> I think the log can't reflect the behavior of the consumer connection, because consumer.create doesn't do any connect; it only constructs a ZookeeperConsumerConnector instance

Th…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19856
Actually there's no issue here; IMHO your understanding of this log is slightly different from the original pu…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19856
I guess the original purpose of such a log is to reflect the behavior of the consumer connection. It is not super necessary to make such a trivial change. Also, `ReliableKafkaReceiver` is not recommended…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19631
LGTM.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19812
Did this failure ("For some reason, all of the 3 executors failed.") happen during task running or before task submission? Besides, if you're running on YARN, YARN will bring n…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19834
LGTM.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153678912
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19812
Hi @liutang123, would you mind explaining to us the issue you met and how to reproduce it? Currently we don't know what the actual issue is or how to evaluate your ch…
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153408574
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153410482
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/submit/steps/BaseDriverConfigurationStep.scala
---
@@ -0,0 +1,162
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153407637
--- Diff:
resource-managers/kubernetes/core/src/main/scala/org/apache/spark/deploy/k8s/Config.scala
---
@@ -0,0 +1,160 @@
+/*
+ * Licensed to
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153407820
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -702,6 +715,19 @@ object SparkSubmit extends CommandLineUtils with
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19717#discussion_r153408859
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -590,6 +600,11 @@ private[deploy] class SparkSubmitArguments(args
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19717
I think we'd better honor the newly added `org.apache.spark.deploy.SparkApplication` to implement the k8s client, like #…
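A sketch of the suggestion; the trait shape below mirrors Spark's private `org.apache.spark.deploy.SparkApplication` (reproduced here as an assumption), and the k8s class name is hypothetical:

```scala
import org.apache.spark.SparkConf

trait SparkApplication {
  def start(args: Array[String], conf: SparkConf): Unit
}

// Hypothetical k8s client entry point built on the trait instead of a bare main():
class KubernetesClientApplication extends SparkApplication {
  override def start(args: Array[String], conf: SparkConf): Unit = {
    // build the driver pod spec from conf and submit it to the k8s API server
  }
}
```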
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
@rxin, thanks for your comment. The key motivation of this PR is to expose the metrics Sink/Source interfaces for third-party plugins, so that we don't need to maintain every different…
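A sketch of what an externalized source could look like; the trait shape mirrors Spark's internal `org.apache.spark.metrics.source.Source` (reproduced as an assumption), and the plugin class is hypothetical:

```scala
import com.codahale.metrics.MetricRegistry

trait Source {
  def sourceName: String
  def metricRegistry: MetricRegistry
}

class MyPluginSource extends Source { // hypothetical third-party source
  override val sourceName: String = "myPlugin"
  override val metricRegistry: MetricRegistry = new MetricRegistry()
  metricRegistry.counter("requests") // a metric owned by the plugin
}
```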
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19788
@yucai, I'm thinking about the necessity of adding this new configuration, `spark.shuffle.continuousFetch`, that you mentioned above. The PR you proposed is actually a superset of the previous way,…
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/11994#discussion_r153110760
--- Diff: core/src/main/scala/org/apache/spark/metrics/MetricsSystem.scala
---
@@ -195,18 +196,26 @@ private[spark] class MetricsSystem private
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
@felixcheung, thanks for your review. I think there's no next step; the current changes should be enough for users to externalize customized metrics sources and…
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r153089584
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,12 +196,14 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19775
Do we have to put this in Spark? Is it a necessary part of k8s? I think if we pull in that PR (https://github.com/apache/spark/pull/11994), then this can stay out of Spark as a package. Even…
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19802
Can you please explain more, and how to reproduce this issue? Spark's RPC is not designed for version compatibility.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r152891920
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,12 +196,14 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r152891792
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -812,10 +812,13 @@ private[spark] object MapOutputTracker extends
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r152891172
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,12 +196,14 @@ private[spark] class
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19788#discussion_r152891438
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/IndexShuffleBlockResolver.scala ---
@@ -196,12 +196,14 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19788
ok to test.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19788
Sure, I will do it tomorrow.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19788
@yucai, would you mind adding more explanation to your PR description?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19631
Did another round of review; LGTM overall. @tgravescs, do you have any comments?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r151320052
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -216,7 +216,9 @@ private[spark] object
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19633#discussion_r151308496
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -424,11 +424,19 @@ case class FileSourceScanExec
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19643#discussion_r151307924
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -1838,12 +1852,21 @@ class SparkContext(config: SparkConf) extends
Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19741#discussion_r151305271
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -268,8 +268,13 @@ private[spark
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19741
From my understanding, the above exception does no harm to the Spark application; it just hits some threading corner case during stop. Am I right…
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r151015745
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -412,8 +412,6 @@ class SparkContext(config: SparkConf) extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r151017494
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -216,7 +216,9 @@ private[spark] object
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r151018454
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/Client.scala
---
@@ -745,15 +739,20 @@ private[spark] class Client
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r150751268
--- Diff: core/src/main/scala/org/apache/spark/SecurityManager.scala ---
@@ -551,13 +553,10 @@ private[spark] class SecurityManager(
private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r150751761
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -92,6 +92,11 @@ object SparkSubmit extends CommandLineUtils with Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19631#discussion_r150752055
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -216,7 +216,9 @@ private[spark] object
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19633#discussion_r150746876
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala
---
@@ -424,11 +424,19 @@ case class FileSourceScanExec
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19711#discussion_r150712289
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala ---
@@ -113,7 +116,7 @@ class SQLListener(conf: SparkConf) extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19735
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19735
ok to test.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/11994
Sure, let me update the code.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19693
Whether shuffle write time should include the file open/close time is debatable; also, we don't know whether the actual open action is lazy or not (it depends on the OS). But one downside of this c…
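An illustration of the measurement question, i.e. where the timed span starts and ends (the file name below is hypothetical):

```scala
import java.io.FileOutputStream

val start = System.nanoTime()
val out = new FileOutputStream("shuffle_0_0_0.data") // include open in write time?
out.write(Array[Byte](1, 2, 3))
out.close()                                          // ...and close?
val writeTimeNs = System.nanoTime() - start
```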
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19711#discussion_r150172409
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/ui/SQLListener.scala ---
@@ -113,7 +116,7 @@ class SQLListener(conf: SparkConf) extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19661#discussion_r150171482
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -108,6 +108,27 @@ class KryoSerializerSuite extends
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19649#discussion_r149845572
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/catalog/events.scala
---
@@ -62,6 +62,16 @@ case class DropDatabasePreEvent(database
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/19661#discussion_r149619662
--- Diff:
core/src/main/scala/org/apache/spark/serializer/KryoSerializer.scala ---
@@ -178,10 +178,40 @@ class KryoSerializer(conf: SparkConf
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19649
One question, as mentioned above: do we need to track partition-related events? @cloud-fan @hvanhovell @gatorsmile
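For illustration, a hypothetical partition-level event in the style of the existing database/table event case classes in `org.apache.spark.sql.catalyst.catalog.events` (not actual Spark code):

```scala
// A partition spec identifies a partition by its column values,
// e.g. Map("dt" -> "2018-01-01").
case class CreatePartitionPreEvent(
    database: String,
    table: String,
    spec: Map[String, String])

case class CreatePartitionEvent(
    database: String,
    table: String,
    spec: Map[String, String])
```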
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/19688
Please specify the purpose of this change in PR description. If it belongs
to #19663 , why don't you change it