Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18041
@LantaoJin I don't think you have to trim the metrics conf coming from
`SparkConf`:
1. `SparkConf` already handles `trim` when reading from the spark-defaults.conf
file.
2. If you
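As an illustration of point 1, here is a minimal sketch of trimming keys and values while loading a defaults-style file. This is not Spark's actual `SparkConf` code; the `=` separator and parsing are simplified for illustration.

```java
// Hypothetical sketch: trim surrounding whitespace when parsing key/value lines.
import java.util.HashMap;
import java.util.Map;

public class TrimConf {
    static Map<String, String> parse(String[] lines) {
        Map<String, String> conf = new HashMap<>();
        for (String line : lines) {
            int eq = line.indexOf('=');
            if (eq > 0) {
                // Both key and value are trimmed once, at load time,
                // so downstream readers never need to trim again.
                conf.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        Map<String, String> c = parse(new String[]{"  spark.metrics.conf = metrics.properties  "});
        System.out.println(c.get("spark.metrics.conf"));
    }
}
```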
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18102
Looks like it will only print out the path of the directory, not the file; is
that on purpose?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/18102
I suspect it will be too verbose to print out the path.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17113
Thanks @tgravescs , I will update the code soon.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
@viirya , this is slightly different from caching an RDD. It is more like
broadcasting: the final state is that each executor will hold the whole data of
RDD2; the difference
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
I see. I think at least we should make this cache mechanism controllable by
a flag. I'm guessing that in some HPC clusters or single-node clusters this
problem is not so severe.
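A minimal sketch of what such an opt-in flag could look like. The configuration key `spark.cartesian.localCache.enabled` is purely illustrative, not a real Spark setting.

```java
// Hypothetical sketch: gate a local-cache code path behind a boolean flag,
// defaulting to off so users must opt in explicitly.
import java.util.HashMap;
import java.util.Map;

public class CacheFlagDemo {
    static boolean localCacheEnabled(Map<String, String> conf) {
        return Boolean.parseBoolean(
            conf.getOrDefault("spark.cartesian.localCache.enabled", "false"));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(localCacheEnabled(conf));      // default: disabled
        conf.put("spark.cartesian.localCache.enabled", "true");
        System.out.println(localCacheEnabled(conf));      // user opted in
    }
}
```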
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
@ConeyLiu , I would suggest adding a flag to `CartesianRDD` to specify whether
the local cache should be enabled. Users could choose to enable it or not.
Besides, if caching into BlockManager fails, can
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17723
@mgummelt We have in-house delegation token providers for HiveServer2 and
multiple HBase clusters. I think this is useful in the Hadoop world, so it is
better to keep it.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17963
I don't think so. Because `mergeApplicationListing` and `getAppUI` are
running in two different threads, there is a chance that these two
methods are processing the same event file
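One common way to keep two threads from processing the same file at once is to have each thread claim the file first. This is a hypothetical guard, not Spark's actual fix; the class and method names are illustrative.

```java
// Hypothetical sketch: only the thread that successfully claims a file
// may process it; others skip or wait.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class EventLogGuard {
    private final Set<String> inProgress = ConcurrentHashMap.newKeySet();

    // Returns true only for the thread that claimed the file.
    boolean tryClaim(String path) { return inProgress.add(path); }

    void release(String path) { inProgress.remove(path); }

    public static void main(String[] args) {
        EventLogGuard g = new EventLogGuard();
        System.out.println(g.tryClaim("app-1.log")); // first claimer wins
        System.out.println(g.tryClaim("app-1.log")); // already being processed
        g.release("app-1.log");
        System.out.println(g.tryClaim("app-1.log")); // free to claim again
    }
}
```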
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17963
From my understanding, with your fix there is a chance that the event log
file will be processed twice, which could be a big overhead if the event log is
very large. Also, this PR looks more
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17937
Besides, I guess this issue only exists in YARN cluster mode; can you also
verify that?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17937
@victor-wong can you please update the PR title like other PRs?
From your description, it seems the log is from an old Spark version; in the
latest Spark there's
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
At first glance, I have several questions:
1. If the parent's partition has already been cached in the local BlockManager,
do we need to cache it again?
2. There will be situations
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17936
Looks like there's a similar PR #17898 trying to address this issue; can
you please elaborate on the differences compared to that one?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17872
This change may conflict with #17723 , but I think it is easy to
resolve. CC @mgummelt .
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17872#discussion_r115200282
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala
---
@@ -22,6 +22,8 @@ import
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17866
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17872#discussion_r115186429
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala
---
@@ -48,9 +50,16 @@ private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17872#discussion_r115184099
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala
---
@@ -48,9 +50,16 @@ private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17872#discussion_r115183873
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala
---
@@ -48,9 +50,16 @@ private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17872#discussion_r115182668
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/security/HadoopFSCredentialProvider.scala
---
@@ -48,9 +50,16 @@ private
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17866#discussion_r115178712
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -429,8 +429,7 @@ private[spark] class
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17870
Why not submit a PR for the master branch?
From my understanding, your patch is trying to catch the exception and continue
to get tokens from other filesystems, right?
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17866
[SPARK-20605][Core][Yarn][Mesos] Deprecate not used AM and executor port
configuration
## What changes were proposed in this pull request?
After SPARK-10997, client mode Netty RpcEnv
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17617
@holdenk , the basic problem is that Spark uses Hadoop FileSystem's
statistics API to get bytesRead and bytesWritten per task. This statistics API
is implemented with thread-local variables; it is OK
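The thread-local pitfall can be sketched in plain Java. This is a hypothetical demo, not Spark or Hadoop code: bytes recorded on a different thread are invisible to the task thread's counter.

```java
// Hypothetical sketch: per-task metrics backed by thread-local variables
// miss I/O that happens on another thread.
public class ThreadLocalStatsDemo {
    private static final ThreadLocal<long[]> bytesRead =
        ThreadLocal.withInitial(() -> new long[]{0L});

    static void recordRead(long n) { bytesRead.get()[0] += n; }

    static long currentThreadBytes() { return bytesRead.get()[0]; }

    public static void main(String[] args) throws InterruptedException {
        recordRead(100);                               // same thread: counted
        Thread io = new Thread(() -> recordRead(400)); // other thread: invisible here
        io.start();
        io.join();
        // The task thread sees only its own thread-local value.
        System.out.println(currentThreadBytes());      // prints 100, not 500
    }
}
```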
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17841
I think @srowen already clarified it very clearly: you can use it at your
own risk, but making it public and adding it to the docs should be well
considered.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17841
I remember these REST APIs are not public APIs; they are only used by
SparkSubmit internally. Shall we add docs about them?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17794
The change LGTM; I think test PRs are always welcome. CC @srowen for a
committer's comment.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17782
@victor-wong are you going to submit a PR for branch-1.6? Does this issue
exist in the master branch? Also, would you please elaborate more on this
issue? Thanks.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17794
Would you please update the title to add `[core]` like other PRs?
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17794#discussion_r114309837
--- Diff: core/src/test/scala/org/apache/spark/storage/BlockIdSuite.scala
---
@@ -101,6 +129,30 @@ class BlockIdSuite extends SparkFunSuite
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17794#discussion_r114309758
--- Diff: core/src/test/scala/org/apache/spark/storage/BlockIdSuite.scala
---
@@ -19,6 +19,8 @@ package org.apache.spark.storage
import
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17824
Thanks @vanzin ! `SparkStatusTracker` depends on `JobProgressListener`,
which is already deprecated; will you remove `JobProgressListener` and
rewrite `SparkStatusTracker`?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17824
@vanzin are we going to remove these listeners in the future, or just keep
them as deprecated? Some projects like Zeppelin explicitly depend on these
listeners, not only for code simplicity but also
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@vanzin , since branch 2.0 doesn't have this feature in the UI
(https://issues.apache.org/jira/browse/SPARK-11272), I don't think the fix is
required in branch-2.0.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
Thanks @vanzin , let me submit a patch for branch 2.0.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
I assume only the event log download will be affected by #17582 .
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@ajbozarth , I checked the UI related to attemptId; it seems fine. Can you
please point out which code potentially has the regression?
Thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@ajbozarth OK, I will verify it.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@vanzin , can you please review this PR? Thanks! The download link is
broken after change #17582 . Now it will check the SparkUI with the given appId
and attemptId. The previous way of setting attemptId
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
This is due to my changes in #17582 : with that change, the download API will
verify against the correct ACLs, so if the attemptId is not found, `withSparkUI`
will fail to get the correct SparkUI to do
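The optional-attempt case can be sketched as follows. This is a hypothetical helper; the URL layout is illustrative and not necessarily Spark's exact REST path.

```java
// Hypothetical sketch: build a download path where the attemptId may be absent,
// so applications without attempts still get a valid link.
import java.util.Optional;

public class DownloadLink {
    static String logPath(String appId, Optional<String> attemptId) {
        return "/api/v1/applications/" + appId
            + attemptId.map(a -> "/" + a).orElse("")
            + "/logs";
    }

    public static void main(String[] args) {
        System.out.println(logPath("app-123", Optional.empty()));   // no attempt
        System.out.println(logPath("app-123", Optional.of("1")));   // attempt 1
    }
}
```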
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
@ajbozarth the key point is that some Spark applications don't have an
attempt ID. It's not related to one attempt or two; for example:
```
{
```
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17795
Jenkins, retest this please.
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17795
[SPARK-20517][UI] Fix broken history UI download link
## What changes were proposed in this pull request?
The download link in history server UI is concatenated
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17700
@squito , do you have any further comment? Thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17755
Thanks @vanzin .
Github user jerryshao closed the pull request at:
https://github.com/apache/spark/pull/17755
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17755
CC @vanzin , this backport can be merged to branch 2.0 cleanly.
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17755
[SPARK-20239][CORE][2.1-backport] Improve HistoryServer's ACL mechanism
The current SHS (Spark History Server) has two different ACLs:
* ACL of the base URL, which is controlled
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
OK, let me try it, thanks.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
What about branch 2.0? Do we also need to backport to it, @vanzin ?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
OK, thanks @tgravescs .
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
Thanks @tgravescs for your comments. Do you think it is a good idea to read
out the ACLs during `mergeApplicationListing` in
[here](https://github.com/apache/spark/blob/master/core/src/main/scala/org
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17700#discussion_r112879836
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -114,10 +114,16 @@ private[spark] object ExecutorsPage {
val
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17625
@jsoltren , from a quick look at your current implementation, it looks like
you only track the Netty memory usage in `NettyBlockTransferService`, but in
Spark there are some other places which will create
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112603493
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112596999
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112595717
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -405,9 +405,7 @@ class SparkContext(config: SparkConf) extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112595579
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112594234
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
Just updated the description; please review again @vanzin , thanks!
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17700
Thanks @squito for the clarification; sorry, I misunderstood it.
Regarding this new `memoryMetrics`, will all the memory-related metrics be
shown here, like what you mentioned in the JIRA
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112452449
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -405,9 +405,7 @@ class SparkContext(config: SparkConf) extends Logging
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112448477
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -50,22 +49,22 @@ import org.apache.spark.util.{JsonProtocol
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17638#discussion_r112448150
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -405,9 +405,7 @@ class SparkContext(config: SparkConf) extends Logging
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/17700
[SPARK-20391][Core] Rename memory related fields in ExecutorSummary
## What changes were proposed in this pull request?
This is a follow-up of #14617 to make the name of memory related
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
Jenkins, retest this please.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17582#discussion_r112358891
--- Diff:
core/src/main/scala/org/apache/spark/status/api/v1/ApiRootResource.scala ---
@@ -184,14 +184,27 @@ private[v1] class ApiRootResource extends
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17620
I'm still not sure what issue you met during recovery and what would
happen if that issue occurred.
Looking at the fix you provided, what you mainly did is shut down the rpcEnv;
what
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17658#discussion_r112157871
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/SparkListenerBus.scala ---
@@ -71,7 +71,6 @@ private[spark] trait SparkListenerBus
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17658#discussion_r112154490
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -243,18 +243,19 @@ private[history] class FsHistoryProvider
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112144166
--- Diff:
core/src/main/scala/org/apache/spark/deploy/security/ConfigurableCredentialManager.scala
---
@@ -41,15 +41,17 @@ import
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112142810
--- Diff: core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala ---
@@ -564,12 +566,22 @@ object SparkSubmit extends CommandLineUtils
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
Thanks @tgravescs for your reply.
> on the history server I would expect spark.acls.enable=false and
spark.history.ui.acls.enable=true, I can see where that could be confusing,
perh
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112096515
--- Diff:
resource-managers/yarn/src/test/resources/META-INF/services/org.apache.spark.deploy.yarn.security.ServiceCredentialProvider
---
@@ -1 +0,0
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17665#discussion_r112095984
--- Diff:
resource-managers/yarn/src/test/resources/META-INF/services/org.apache.spark.deploy.yarn.security.ServiceCredentialProvider
---
@@ -1 +0,0
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r112095408
--- Diff:
core/src/test/scala/org/apache/spark/deploy/SparkHadoopUtilSuite.scala ---
@@ -0,0 +1,83 @@
+/*
+ * Licensed to the Apache Software
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
@tgravescs , with the changes to the history UI, the REST API and web UI are
now mixed. The base URL to list all the apps goes through the REST API.
The key problem here is that in the History Server we
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17588
Ping @srowen again.
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r111860131
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/FsHistoryProviderSuite.scala
---
@@ -143,12 +156,26 @@ class FsHistoryProviderSuite
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/14617
@squito , by revising this code, I found there are some places which are
misleading and could be improved:
* All the memory usage referred to here concerns on-heap memory and off-heap
memory
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17582
@tgravescs @vanzin do you have any comments on this JIRA?
A compromise is that any user could see the full app list, but detailed
information is still controlled by per-app ACLs. But we
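The compromise can be sketched as a toy ACL check. The names are hypothetical, not the History Server's actual code: the listing is visible to everyone, while per-app detail requires membership in that app's ACL.

```java
// Hypothetical sketch: open listing, per-app detail gated by an ACL set.
import java.util.List;
import java.util.Map;
import java.util.Set;

public class AppAcls {
    // Anyone may see the list of application IDs.
    static List<String> listApps(Map<String, Set<String>> acls) {
        return List.copyOf(acls.keySet());
    }

    // Detail pages require the user to be in that app's ACL.
    static boolean canViewDetail(Map<String, Set<String>> acls, String app, String user) {
        return acls.getOrDefault(app, Set.of()).contains(user);
    }

    public static void main(String[] args) {
        Map<String, Set<String>> acls = Map.of("app-1", Set.of("alice"));
        System.out.println(listApps(acls).contains("app-1"));   // everyone sees the list
        System.out.println(canViewDetail(acls, "app-1", "bob")); // detail stays gated
    }
}
```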
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111521465
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Jenkins, test this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
It is just a Java 8 lambda function, nothing related to Scala...
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
You mean this
[line](https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaKafkaWordCount.java#L76)?
It's because our KafkaInputDStream
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17580
What's the meaning of "some of the Scala"?
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17495
Jenkins, retest this please.
Github user jerryshao commented on the issue:
https://github.com/apache/spark/pull/17625
@jsoltren thanks for bringing up this very old PR.
Looking at the UI you pasted here, I'm wondering what the usage of
`Completed Stages` is here; what's the difference compared to `Stages
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111303973
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17342#discussion_r111303746
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala ---
@@ -148,6 +149,8 @@ private[sql] class SharedState(val sparkContext
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111300760
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111299488
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111298162
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r111292239
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/17495#discussion_r111287664
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/FsHistoryProvider.scala ---
@@ -320,14 +321,35 @@ private[history] class FsHistoryProvider