tgravescs commented on pull request #30164:
URL: https://github.com/apache/spark/pull/30164#issuecomment-718073096
The paper talks about:
we choose Magnet shuffle services in locations beyond the active Spark
executors, and launch Spark executors later via dynamic allocation based on
locations of the chosen Magnet shuffle services. This way, instead of
choosing Magnet shuffle services based on Spark executor locations, we launch
Spark executors based on locations of Magnet shuffle services. This optimization
is possible because of Magnet's integration with Spark native
I know this was also talked about in the SPIP. The current implementation
does not seem to do this. Can the description here please be updated to state
what it does and why? Is this something to be PR'd later, or is it just being
discussed as a possibility?
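For illustration, a minimal sketch of the ordering the paper describes: pick the Magnet shuffle service (merger) locations first, then request executors via dynamic allocation preferring those hosts. The names below (MergerLocation, chooseMergerLocations, requestExecutorsAt) are invented for this sketch and are not the actual Magnet or Spark APIs.

```scala
// Hypothetical sketch only: shows the ordering described in the paper,
// not the real Magnet/Spark implementation.
object MagnetPlacementSketch {
  final case class MergerLocation(host: String)

  // Step 1 (per the paper): choose Magnet shuffle service locations first,
  // possibly on hosts that have no active Spark executor yet.
  def chooseMergerLocations(candidateHosts: Seq[String], count: Int): Seq[MergerLocation] =
    candidateHosts.take(count).map(MergerLocation)

  // Step 2: request executors through dynamic allocation with a locality
  // preference for those hosts, so executors follow the shuffle services
  // rather than the other way around.
  def requestExecutorsAt(locations: Seq[MergerLocation]): Map[String, Int] =
    locations.groupBy(_.host).map { case (host, locs) => host -> locs.size }

  def main(args: Array[String]): Unit = {
    val hosts = Seq("host-a", "host-b", "host-c", "host-d")
    val mergers = chooseMergerLocations(hosts, count = 2)
    val executorRequest = requestExecutorsAt(mergers)
    println(s"Chosen merger hosts: ${mergers.map(_.host).mkString(", ")}")
    println(s"Executor locality preferences: $executorRequest")
  }
}
```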