GitHub user sandeep-katta opened a pull request:
https://github.com/apache/spark/pull/21565
The wrong idle timeout value is logged for executors holding cached blocks.
It is corrected to match the configuration.
## What changes were proposed in this pull request?
The idle timeout printed in the logs is now chosen based on whether the
executor holds cached blocks: if it does, cachedExecutorIdleTimeoutS is used;
otherwise, executorIdleTimeoutS is used.
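The selection logic can be sketched roughly as below. This is a hypothetical illustration of the fix, not the actual ExecutorAllocationManager code; the object name, method names, and the timeout values are assumptions for demonstration only.

```scala
// Hypothetical sketch of the corrected log message: report the timeout
// that actually applies, depending on whether the executor has cached blocks.
object IdleTimeoutLog {
  // Illustrative values only; the real values come from Spark configuration
  // (spark.dynamicAllocation.executorIdleTimeout and
  //  spark.dynamicAllocation.cachedExecutorIdleTimeout).
  val executorIdleTimeoutS: Long = 60L
  val cachedExecutorIdleTimeoutS: Long = 120L

  // Pick the timeout to report based on the cached-block state.
  def reportedTimeout(hasCachedBlocks: Boolean): Long =
    if (hasCachedBlocks) cachedExecutorIdleTimeoutS else executorIdleTimeoutS

  // Build the removal log line using the correct timeout.
  def removalMessage(executorId: String, hasCachedBlocks: Boolean): String =
    s"Removing executor $executorId because it has been idle for " +
      s"${reportedTimeout(hasCachedBlocks)} seconds"
}
```

With these assumed values, an executor holding cached blocks would be reported with the 120-second cached timeout rather than the plain 60-second idle timeout, which matches the log excerpt below.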
## How was this patch tested?
Manually tested:
spark-sql> cache table sample;
2018-05-15 14:44:02 INFO DAGScheduler:54 - Submitting 3 missing tasks from
ShuffleMapStage 0 (MapPartitionsRDD[8] at processCmd at CliDriver.java:376)
(first 15 tasks are for partitions Vector(0, 1, 2))
2018-05-15 14:44:02 INFO YarnScheduler:54 - Adding task set 0.0 with 3
tasks
2018-05-15 14:44:03 INFO ExecutorAllocationManager:54 - Requesting 1 new
executor because tasks are backlogged (new desired total will be 1)
...
...
2018-05-15 14:46:10 INFO YarnClientSchedulerBackend:54 - Actual list of
executor(s) to be killed is 1
2018-05-15 14:46:10 INFO **ExecutorAllocationManager:54 - Removing
executor 1 because it has been idle for 120 seconds (new desired total will be
0)**
2018-05-15 14:46:11 INFO YarnSchedulerBackend$YarnDriverEndpoint:54 -
Disabling executor 1.
2018-05-15 14:46:11 INFO DAGScheduler:54 - Executor lost: 1 (epoch 1)
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sandeep-katta/spark loginfoBug
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/21565.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #21565
----
commit 30fcef650ee2bd2873bf402448652acba055f989
Author: sandeep-katta <sandeep.katta2007@...>
Date: 2018-06-14T09:56:59Z
wrong Idle Timeout value is used in case of the cacheBlock.
It is corrected as per the configuration.
----