Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/14943#discussion_r77722010
--- Diff: yarn/src/main/scala/org/apache/spark/deploy/yarn/ExecutorRunnable.scala ---
@@ -59,43 +58,46 @@ private[yarn] class ExecutorRunnable(
var rpc: YarnRPC = YarnRPC.create(conf)
var nmClient: NMClient = _
- val yarnConf: YarnConfiguration = new YarnConfiguration(conf)
- lazy val env = prepareEnvironment(container)
def run(): Unit = {
- logInfo("Starting Executor Container")
+ logDebug("Starting Executor Container")
nmClient = NMClient.createNMClient()
- nmClient.init(yarnConf)
+ nmClient.init(conf)
nmClient.start()
startContainer()
}
- def startContainer(): java.util.Map[String, ByteBuffer] = {
- logInfo("Setting up ContainerLaunchContext")
+ def launchContextDebugInfo(): String = {
+ val commands = prepareCommand()
--- End diff --
It feels a bit odd/inefficient to be preparing commands/envs that are never
used except for logging.
But I guess the alternative — conditionalizing it in the YARN allocator, or
passing a param into ExecutorRunnable — would have to be evaluated on every
allocate call.
Really it would be nice to only generate the command/env once, since all of
them are the same. Obviously, if we later change something to allow executors
to differ, that no longer makes sense.
Anyway, I'm ok with this.
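The "generate once" idea above could be sketched with a `lazy val`, which Scala evaluates at most once per instance. This is a hypothetical illustration, not the actual Spark code: `buildDebugInfo`, the placeholder command, and the env map are all stand-ins for the real `prepareCommand()`/`prepareEnvironment()` work.

```scala
// Hypothetical sketch: memoize the launch-context debug string so the
// command/env is built only once, since all executors share the same
// launch context. Names here are illustrative, not Spark's.
object LaunchContextCache {
  private var buildCount = 0  // for demonstration: counts real builds

  // Stand-in for the expensive prepareCommand()/prepareEnvironment() work.
  private def buildDebugInfo(): String = {
    buildCount += 1
    val commands = Seq("java", "-server", "ExecutorMain")   // placeholder
    val env = Map("SPARK_YARN_MODE" -> "true")              // placeholder
    s"commands: ${commands.mkString(" ")}; env: $env"
  }

  // lazy val: buildDebugInfo() runs on first access, later accesses
  // return the cached string without re-running it.
  lazy val debugInfo: String = buildDebugInfo()

  def builds: Int = buildCount
}
```

Accessing `LaunchContextCache.debugInfo` repeatedly would then build the string only on the first access, which is the trade-off the comment is weighing against recomputing it per container.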