vanzin commented on a change in pull request #23706: [SPARK-26790][CORE] Change approach for retrieving executor logs and attributes: self-retrieve
URL: https://github.com/apache/spark/pull/23706#discussion_r256528463
##########
File path: core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
##########
@@ -285,25 +285,29 @@ private[spark] object CoarseGrainedExecutorBackend extends Logging {
       printUsageAndExit()
     }
-    run(driverUrl, executorId, hostname, cores, appId, workerUrl, userClassPath)
+    val createFn: (RpcEnv, SparkEnv) => CoarseGrainedExecutorBackend = { case (rpcEnv, env) =>
+      new CoarseGrainedExecutorBackend(rpcEnv, driverUrl, executorId, hostname, cores,
+        userClassPath, env)
+    }
+    run(driverUrl, executorId, hostname, cores, appId, workerUrl, userClassPath, createFn)
     System.exit(0)
   }

   private def printUsageAndExit() = {
     // scalastyle:off println
     System.err.println(
       """
-      |Usage: CoarseGrainedExecutorBackend [options]
-      |
-      | Options are:
-      |   --driver-url <driverUrl>
-      |   --executor-id <executorId>
-      |   --hostname <hostname>
-      |   --cores <cores>
-      |   --app-id <appid>
-      |   --worker-url <workerUrl>
-      |   --user-class-path <url>
-      |""".stripMargin)
+      |Usage: CoarseGrainedExecutorBackend [options]
Review comment:
Spark uses the previous indentation style in many places, so I guess that
answers your question. I wouldn't exactly call this "correcting the
indentation"...
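For context on the refactor in the diff above: `run` now accepts a factory function instead of constructing the backend itself, so a caller can plug in a different `CoarseGrainedExecutorBackend` subclass. A minimal standalone sketch of that factory-function pattern follows; the `RpcEnv`, `SparkEnv`, and `Backend` stand-ins here are hypothetical simplifications, not Spark's real classes.

```scala
// Sketch of the factory-function pattern used in the diff above.
// RpcEnv, SparkEnv, and Backend are hypothetical stand-ins.
object FactorySketch {
  case class RpcEnv(name: String)
  case class SparkEnv(conf: Map[String, String])

  class Backend(val rpcEnv: RpcEnv, val env: SparkEnv, val label: String)

  // run() builds the environments itself, but delegates backend
  // construction to the caller-supplied factory, so a different entry
  // point can supply a different Backend subclass.
  def run(createFn: (RpcEnv, SparkEnv) => Backend): Backend = {
    val rpcEnv = RpcEnv("executor")
    val env = SparkEnv(Map.empty)
    createFn(rpcEnv, env)
  }

  def main(args: Array[String]): Unit = {
    val backend = run((rpcEnv, env) => new Backend(rpcEnv, env, "coarse-grained"))
    println(backend.label)
  }
}
```

The benefit of passing `createFn` rather than a flag is that `run` stays agnostic about which concrete backend it is driving.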