Github user suyanNone commented on the issue:
https://github.com/apache/spark/pull/17936
So careless of me not to notice UnsafeCartesianRDD's
ExternalAppendOnlyUnsafeRowArray; that's nice. I hadn't read all the discussion
here... so the solution can be unified with what UnsafeCartesianRDD already has.
Github user suyanNone commented on the issue:
https://github.com/apache/spark/pull/17936
Could we create a MemoryAndDiskArray, similar to ExternalAppendOnlyMap?
Such a MemoryAndDiskArray could be used not only here but also in groupByKey,
and its memory could be controlled by the MemoryManager.
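The idea proposed above can be sketched outside Spark. The following is a minimal, illustrative Python model of a "memory-and-disk" append-only array: items stay in memory up to a byte budget and overflow is spilled to a disk file. The class name, the budget parameter, and all details are assumptions for illustration, not Spark APIs.

```python
import pickle
import tempfile

class MemoryAndDiskArray:
    """Append-only array that keeps items in memory up to a budget and spills
    the rest to a disk file. Illustrative sketch only, not Spark code."""

    def __init__(self, memory_budget_bytes=1024):
        # In Spark, this budget would be granted by the MemoryManager.
        self.memory_budget_bytes = memory_budget_bytes
        self._in_memory = []
        self._used_bytes = 0
        self._spill_file = None

    def append(self, item):
        size = len(pickle.dumps(item))
        # Once spilling starts, keep spilling so iteration preserves order.
        if self._spill_file is not None or self._used_bytes + size > self.memory_budget_bytes:
            if self._spill_file is None:
                self._spill_file = tempfile.TemporaryFile()
            pickle.dump(item, self._spill_file)
        else:
            self._in_memory.append(item)
            self._used_bytes += size

    def __iter__(self):
        yield from self._in_memory          # earliest items stayed in memory
        if self._spill_file is not None:
            self._spill_file.seek(0)
            while True:
                try:
                    yield pickle.load(self._spill_file)
                except EOFError:
                    break
```

A real implementation would also need to track spill-file sizes and release the memory reservation back to the memory manager, which this toy omits.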
Github user suyanNone commented on the issue:
https://github.com/apache/spark/pull/12576
@tdas Agreed, `isCheckpointed` should be final. In the current code, is
`isCheckpointed` exposed as public only for testing?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastructure@apache.org or file a JIRA ticket
with INFRA.
Github user suyanNone commented on the issue:
https://github.com/apache/spark/pull/14765
jenkins retest.
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/14765
[SPARK-15815] Kã
## What changes were proposed in this pull request?
(Please fill in changes proposed in this fix)
## How was this patch tested?
(Please explain how this patch was tested)
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-218682081
Already merged into https://github.com/apache/spark/pull/12655; marking this
closed.
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8923
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/12570
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12570#issuecomment-218412962
@ajbozarth Oh... I'm not familiar with standalone mode... I had no idea
there was a class named `LogPage`; nice of you to tell me about that...
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12735#issuecomment-218389524
@tgravescs, yes, the executor will fail fast...
I vaguely remember an application that failed because of an unhealthy
shuffle-server dir; I don't have
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/12735
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12655#issuecomment-215329471
As far as I know, a duplicate stage occurs when:
stage 2 and stage 3 both depend on stage 1, and
stage 4 depends on stage 2 and stage 3.
So, if we get
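The diamond dependency described above (two stages sharing one parent) is exactly the shape where an unmemoized traversal would build the shared parent twice. A toy sketch of deduplicating stage creation by memoizing on the stage id, roughly analogous to looking up shuffleIdToStage; the names here are illustrative, not the actual DAGScheduler code:

```python
def build_stage(dag, stage_id, cache=None):
    """Build the stage for `stage_id`, creating each shared parent exactly
    once by memoizing on the stage id (toy stand-in for shuffleIdToStage)."""
    if cache is None:
        cache = {}
    if stage_id in cache:                 # already built: reuse, don't duplicate
        return cache[stage_id]
    parents = [build_stage(dag, p, cache) for p in dag.get(stage_id, [])]
    stage = {"id": stage_id, "parents": parents}
    cache[stage_id] = stage
    return stage

# Diamond: stages 2 and 3 both depend on stage 1; stage 4 depends on 2 and 3.
diamond = {4: [2, 3], 2: [1], 3: [1]}
```

Without the cache lookup, stage 1 would be created once via stage 2 and again via stage 3, which is the duplicate-stage problem being discussed.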
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12735#issuecomment-215298717
To be honest, I haven't walked through the whole YARN shuffle server
process; I just fixed the problem our users reported: they couldn't connect to
the shuffle server due to create lev
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12735#issuecomment-215297811
As for why there can be multiple meta files: assume we find one, but the disk
has become a read-only filesystem. Should we still use it, or choose another
healthy dir to create a new
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12735#issuecomment-215294953
```
registeredExecutorFile =
findRegisteredExecutorFile(conf.getTrimmedStrings("yarn.nodemanager.local-dirs"));
```
we got t
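The snippet above scans `yarn.nodemanager.local-dirs` for the registered-executor meta file, and the fix under discussion is to prefer a healthy directory. A minimal sketch of such a selection, with a hypothetical helper name and simple writability as the health check (the real service's checks may differ):

```python
import os
import tempfile

def find_healthy_dir(local_dirs):
    """Return the first configured local dir that exists and is writable,
    or None if none qualifies. Hypothetical helper, not the actual
    YarnShuffleService code."""
    for d in local_dirs:
        if os.path.isdir(d) and os.access(d, os.W_OK):
            return d
    return None

# A writable temp dir stands in for a healthy yarn.nodemanager.local-dir.
demo_dir = tempfile.mkdtemp()
```

This sidesteps the read-only-filesystem case mentioned above: a dir that exists but is no longer writable is skipped in favor of the next one.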
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/12735
[SPARK-14957][Spark] Adopt healthy dir to store executor meta
## What changes were proposed in this pull request?
Adopt a healthy dir to store executor meta.
## How was this
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-214655571
Reverted the Set to a Stack, and added a test case.
The reason: we should build map stages from the bottom up (a Stack), not in a
random order (a Set).
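The Stack-versus-Set point can be shown outside Spark: an explicit LIFO stack visits a DAG's ancestors in a deterministic, repeatable order, while iterating a hash Set gives no ordering guarantee. A toy traversal, not the DAGScheduler implementation:

```python
def visit_with_stack(dag, start):
    """Visit `start` and its ancestors with an explicit LIFO stack, giving a
    deterministic order. Toy example only."""
    order, stack, seen = [], [start], set()
    while stack:
        node = stack.pop()              # LIFO: most recently pushed first
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        for parent in dag.get(node, []):
            stack.append(parent)
    return order

# Same diamond DAG as discussed in this thread: 2 and 3 depend on 1, 4 on both.
diamond = {4: [2, 3], 2: [1], 3: [1]}
```

Running this twice yields the same order every time, which is what makes stage construction (and tests over it) reproducible.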
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8927#issuecomment-213363612
@squito @srowen
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/8927#discussion_r60716314
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -1083,8 +1085,6 @@ class DAGSchedulerSuite extends SparkFunSuite
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/8927#discussion_r60715521
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1416,6 +1449,7 @@ class DAGScheduler
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12524#issuecomment-213351412
Can you refer to this and take a look:
https://github.com/apache/spark/pull/8927
GitHub user suyanNone reopened a pull request:
https://github.com/apache/spark/pull/12576
[Spark][Graphx] Fix Graph vertexRDD/EdgeRDD checkpoint results
ClassCastException
## What changes were proposed in this pull request?
This PR fixes the compute chain from CheckpointRDD
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/12576
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/12576#issuecomment-212938601
Opened by mistake; closing first.
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/12576
[Spark][Graphx] Fix Graph vertexRDD/EdgeRDD checkpoint results
ClassCastException
## What changes were proposed in this pull request?
This PR fixes the compute chain from CheckpointRDD
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/12570
[SPARK-14750][Spark][UI] Support Spark-on-YARN users viewing finished
application logs in the history server
## What changes were proposed in this pull request?
For Spark on YARN, let users
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8927#issuecomment-212322683
jenkins retest this please
GitHub user suyanNone reopened a pull request:
https://github.com/apache/spark/pull/8927
[SPARK-10796][CORE] Resubmit stage when tasks are lost in zombie TaskSets
We hit this problem in Spark 1.3.0, and I also reproduced it on the latest
version.
desc:
1. We know a running
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8927
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/9992#discussion_r46235437
--- Diff:
yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala ---
@@ -228,6 +228,7 @@ private[spark] class ApplicationMaster
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/9992#issuecomment-160623236
@lianhuiwang Hi, can you take a look at this?
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/10031#issuecomment-160623021
@lianhuiwang
I prefer calling sc.requestTotalExecutors(0), because we can't foresee the
user's behavior after sc.stop,
and I already refined this in my previous mo
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/10031#issuecomment-160507811
No... actually, it is not a good idea. An executor may be killed by YARN for
some reason... or Akka may just disconnect...
I hope you can close this; I will
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/9992
[SPARK-12009][Yarn] Avoid re-allocating YARN containers while the driver
wants to stop all executors
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-159140817
@andrewor14 I have the impression that cogroup will hold two RDDs'
iterators.
If the first iterator is `MEMORY_ONLY` and unrolling failed, then the iterator
is
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/9691
Unify the dependencies entry
A small change.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/suyanNone/spark unify-getDependency
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-155985114
Eh...
The earlier version added a
`blockManager.memoryStore.releasePendingUnrollMemoryForThisThread()` call at [this
line](https://github.com/apache/spark/blob
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-155758546
@andrewor14, I hadn't made my point clearly.
This issue arises because a `memory_disk_level` block did not release its
`unrollMemory` after the block had been put
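The bug described here is a memory-accounting leak: the reservation made while unrolling a block is never released once the block is stored. A toy model of that bookkeeping and of the fix; this is illustrative only, not Spark's MemoryStore, and all names are assumptions:

```python
class UnrollAccounting:
    """Toy model of unroll-memory bookkeeping: memory reserved while
    materializing a block must be released (converted to stored memory)
    when the block is put, otherwise it leaks."""

    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.pending_unroll = 0   # reserved while unrolling a block
        self.stored = 0           # accounted to stored blocks

    def reserve_unroll(self, n):
        if self.pending_unroll + self.stored + n > self.total_bytes:
            raise MemoryError("not enough memory to unroll block")
        self.pending_unroll += n

    def put_block(self, n):
        # The fix discussed above: release the unroll reservation when the
        # block is stored, instead of leaving it pending forever.
        self.pending_unroll -= n
        self.stored += n
```

If `put_block` did not decrement `pending_unroll`, later reservations would spuriously hit the `MemoryError`, which mirrors the reported symptom.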
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-155260117
= =, I will find time to update today...
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/6644
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-153248553
@markhamstra @squito
I need to reconstruct and re-run the test case to confirm that problem...
R1 --|
| --->
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6263#issuecomment-153225204
Yeah, making it configurable looks good.
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6263#issuecomment-151010358
Hi, @archit279thakur, would you mind adding logic for a time-based expiry
when showing lost-executor logs?
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-148297636
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-148025758
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-147999540
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-147999008
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-147964293
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-147963975
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-147957284
@andrewor14
We have three methods to release unroll memory, for three situations:
1. Unroll succeeds. We expect to cache this block in `tryToPut`. We do not
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8923#issuecomment-143923739
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8927#issuecomment-14378
jenkins retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8927#issuecomment-143698080
Reproduced it, so reopening.
GitHub user suyanNone reopened a pull request:
https://github.com/apache/spark/pull/8927
[SPARK-10796][CORE] Resubmit stage when tasks are lost in zombie TaskSets
We hit that problem in Spark 1.3.0, and I also checked the latest Spark
code; I think the problem still exists.
desc
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8927#issuecomment-143632342
I will run a test job on the latest code to confirm whether the problem exists
or not...
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8927
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/8927
[SPARK-10796][CORE] Resubmit stage when tasks are lost in zombie TaskSets
We hit that problem in Spark 1.3.0, and I also checked the latest Spark
code; I think the problem still exists.
1. We
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/8923
[SPARK][SPARK-10842] Eliminate creating duplicate stages while generating the job DAG
When we traverse the RDD graph to generate the stage DAG, Spark skips checking
whether the stage was already added into shuffleIdToStage in
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-138480264
Sorry, I was too late to see that; I will update it today.
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8550#issuecomment-136767971
For SPARK-2666: that patch aims to cancel all stage tasks as soon as a
FetchFailedException is thrown, and I think it is not related to this patch,
right? Because this
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8550#issuecomment-136766398
@squito I closed that patch because there were some errors in the test.
Calling `taskScheduler.cancelTasks(stage.id, true)` in markStageAsFinished
will mark all the
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8550#issuecomment-136762126
@squito Yeah, I will close that.
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8550
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-136661103
@srowen OK, I checked the current code; it changed to a memory threshold per
task instead of per thread, so that problem still exists...
"
In the pre
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4887#issuecomment-136657671
@srowen @nitin2goyal I'm not sure it is needed, because there was a big change
to the per-thread memory threshold.
In the previous situation, the aim was to resolve
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/8550
[SPARK][SPARK-10370] Cancel all running attempts when the stage is marked as
finished
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/7699#issuecomment-136639959
@squito SPARK-10370, eh... I think I already resolved it a few months ago in my
local env... but it's based on Spark 1.3.0...
Are you already working on it?
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8237#issuecomment-132427763
@tdas OK, since you already discussed this before, let's keep it the same.
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8237
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/8237#issuecomment-132054268
@tdas, yeah, I agree it would be semantically confusing, and in a batch
streaming system it will not block the next batch as long as the batch
finishes quickly
GitHub user suyanNone reopened a pull request:
https://github.com/apache/spark/pull/8237
KafkaDirectDStream should filter empty partition tasks or RDDs
To avoid submitting stages and tasks for a 0-event batch.
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/8237
GitHub user suyanNone opened a pull request:
https://github.com/apache/spark/pull/8237
KafkaDirectDStream should filter empty partition tasks or RDDs
To avoid submitting stages and tasks for a 0-event batch.
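The proposal in this PR (skip scheduling work for a 0-event batch) amounts to dropping partitions whose Kafka offset range is empty. A sketch using hypothetical (topic, partition, from_offset, until_offset) tuples, not the real Kafka DStream API:

```python
def nonempty_offset_ranges(offset_ranges):
    """Keep only offset ranges that contain at least one event, so no task is
    scheduled for empty partitions. Tuples are (topic, partition,
    from_offset, until_offset); illustrative only."""
    return [r for r in offset_ranges if r[3] > r[2]]

batch = [
    ("events", 0, 100, 100),   # empty: from == until, no new messages
    ("events", 1, 100, 250),   # 150 new messages
]
```

With this filter, a batch whose every partition is empty yields no ranges at all, so no stage or tasks would be submitted for it, which is the behavior the PR argues for.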
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6644#issuecomment-128218507
@andrewor14 @harishreedharan @squito I missed @andrewor14's comments about the
"long running app" again... as @harishreedharan says, "time limit exp
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/7699#issuecomment-125477126
... Sorry for not updating promptly... I have been busy with offline work. It's
nice of you to refine the code for #4055.
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6644#issuecomment-125468342
retest this please.
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/6644#discussion_r35617005
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/api.scala ---
@@ -60,7 +60,8 @@ class ExecutorSummary private[spark](
val
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r35177284
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -739,6 +742,88 @@ class DAGSchedulerSuite
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-123142054
@squito @markhamstra Hmm, I agree with tracking only the partitionId, which
makes it simpler. It's great to see that patch getting close. Many thanks!
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r35064694
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1193,8 +1193,10 @@ class DAGScheduler(
// TODO: This will
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-118758477
@squito About stage.pendingTasks and shuffleMapStage.isAvailable:
using `isAvailable` instead of `stage.pendingTasks` may need more care to
deal with
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-118708012
@squito Oh... I had skipped it...
1) A task attempt is now described by `TaskInfo` in Spark's `TaskSetManager`.
`TaskSetManager` is responsible for completing task
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r33861293
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1193,8 +1193,10 @@ class DAGScheduler(
// TODO: This will
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r33861063
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -305,7 +305,7 @@ private[spark] class MapOutputTrackerMaster(conf:
SparkConf
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r33860928
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1193,8 +1193,10 @@ class DAGScheduler(
// TODO: This will
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r33858523
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -305,7 +305,7 @@ private[spark] class MapOutputTrackerMaster(conf:
SparkConf
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-118257935
@squito You are right, the TaskSet is set to zombie so the remaining tasks are
not scheduled.
Your test case is good, and I just modified one place: `runEvent`...
Github user suyanNone closed the pull request at:
https://github.com/apache/spark/pull/6586
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6586#issuecomment-117421446
@srowen I was on vacation for the past 10 days; sorry for seeing this so late.
Eh... OK, I will close this patch.
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-117419882
@squito I was on vacation for the past 10 days; sorry for seeing this so late. I
will read your comments more carefully tomorrow, and I'm grateful to you for
revi
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/6644#issuecomment-113865096
retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-113477240
retest this please
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-113438222
retest this please
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r32811940
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1193,8 +1193,10 @@ class DAGScheduler(
// TODO: This will
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r32811839
--- Diff: core/src/main/scala/org/apache/spark/MapOutputTracker.scala ---
@@ -305,7 +305,7 @@ private[spark] class MapOutputTrackerMaster(conf:
SparkConf
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4887#discussion_r32811415
--- Diff: core/src/main/scala/org/apache/spark/storage/MemoryStore.scala ---
@@ -295,9 +296,9 @@ private[spark] class MemoryStore(blockManager
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-113414080
@squito, can you help me review the test again? Thanks.
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r32800016
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -598,6 +598,49 @@ class DAGSchedulerSuite
Github user suyanNone commented on a diff in the pull request:
https://github.com/apache/spark/pull/4055#discussion_r32799824
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/DAGSchedulerSuite.scala ---
@@ -598,6 +598,49 @@ class DAGSchedulerSuite
Github user suyanNone commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-113356815
@andrewor14 The remaining tests failed because I changed the
`DAGScheduler.executorLost` logic.
DAGScheduler.scala:
Original version: change the MapOutputTracker epoch