abhishekd0907 commented on PR #35683:
URL: https://github.com/apache/spark/pull/35683#issuecomment-1094603352
> You could explicitly scope this PR to the case when ESS use is not enabled
- namely, add the additional check for `SHUFFLE_SERVICE_ENABLED` being
disabled. Thoughts ?
@mridulm Migration is enabled only when `spark.storage.decommission.enabled`
is set and at least one of `spark.storage.decommission.rddBlocks.enabled` or
`spark.storage.decommission.shuffleBlocks.enabled` is set, based on the
following code in `CoarseGrainedExecutorBackend`. All of these flags are false
by default.
```scala
val migrationEnabled = env.conf.get(STORAGE_DECOMMISSION_ENABLED) &&
(env.conf.get(STORAGE_DECOMMISSION_RDD_BLOCKS_ENABLED) ||
env.conf.get(STORAGE_DECOMMISSION_SHUFFLE_BLOCKS_ENABLED))
```
So even if the external shuffle service is enabled and this YARN
decommissioning flow is invoked, block migration won't be triggered unless
these flags are turned on. Executors will just wait for tasks to complete
gracefully and then exit. I think this should be okay, but I can add a check
for `SHUFFLE_SERVICE_ENABLED` if you think that is needed?
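For illustration, the gating condition above can be sketched in Python as a
plain boolean check over a config map (the `migration_enabled` helper here is
hypothetical, not part of Spark; it just mirrors the Scala expression from
`CoarseGrainedExecutorBackend`):

```python
def migration_enabled(conf: dict) -> bool:
    # Mirrors the Scala gating logic: the top-level decommission flag must be
    # on, AND at least one of the rdd/shuffle block-migration flags.
    return (
        conf.get("spark.storage.decommission.enabled", False)
        and (
            conf.get("spark.storage.decommission.rddBlocks.enabled", False)
            or conf.get("spark.storage.decommission.shuffleBlocks.enabled", False)
        )
    )
```

With all flags at their defaults (false), `migration_enabled({})` is false,
which is why the decommissioning flow alone does not trigger block migration.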
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]