Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/2126#issuecomment-53892835
org.apache.spark.executor.MesosExecutorBackend provides the main method for running
Spark on Mesos and creates an org.apache.spark.executor.Executor internally.
MesosExecutorBackend's overridden methods come from org.apache.mesos.Executor,
which is registered with org.apache.mesos.MesosExecutorDriver, whose
implementation includes JNI methods.
For example, see the code below:

    SparkHadoopUtil.get.runAsSparkUser { () =>
      MesosNativeLibrary.load()
      // Create a new Executor and start it running
      val runner = new MesosExecutorBackend()
      new MesosExecutorDriver(runner).run()
    }
MesosExecutorDriver registers runner as an executor for the Mesos framework, and
all of the callback methods (registered, launchTask, and so on) are invoked from
C++ JNI code (src/exec/exec.cpp in the Mesos source). The JNI layer calls back
into these Java methods on its own threads.
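The thread-scoping behaviour can be reproduced with plain JDK code. This is a
simplified stand-in, not the real Hadoop machinery: a ThreadLocal models the
per-thread user context that doAs establishes, and the Java Thread stands in for
a JNI-attached driver thread, which carries no inherited Java security context
(unlike a Thread created inside a real doAs block, which inherits the
AccessControlContext). All names here are hypothetical:

```java
public class UgiThreadDemo {
    // Default is the OS process user ("hdfs" on my slave).
    public static final ThreadLocal<String> currentUser =
        ThreadLocal.withInitial(() -> "hdfs");

    public static String getCurrentUser() { return currentUser.get(); }

    // Stand-in for SparkHadoopUtil.get.runAsSparkUser { ... }: binds the
    // Spark user to the *current* thread only, for the duration of body.
    public static void runAsSparkUser(String user, Runnable body) {
        currentUser.set(user);
        try {
            body.run();
        } finally {
            currentUser.remove();
        }
    }

    public static void main(String[] args) {
        String[] seen = new String[2];
        runAsSparkUser("1001079", () -> {
            // Main thread, inside the doAs-style wrapper: sees "1001079".
            seen[0] = getCurrentUser();
            // Models a driver callback thread that never entered the
            // wrapper: it sees the process default, "hdfs".
            Thread jniCallback = new Thread(() -> seen[1] = getCurrentUser());
            jniCallback.start();
            try {
                jniCallback.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        System.out.println(seen[0] + " / " + seen[1]); // 1001079 / hdfs
    }
}
```

This matches what my debug output below shows: the user set up in main() is not
what the Mesos callbacks observe.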
I added my debugging messages like this:
    private[spark] class MesosExecutorBackend
      extends MesosExecutor
      with ExecutorBackend
      with Logging {

      var executor: Executor = null
      var driver: ExecutorDriver = null

      logDebug(UserGroupInformation.getCurrentUser.getUserName) // value is my id, "1001079"
      ...

      override def registered(
          driver: ExecutorDriver,
          executorInfo: ExecutorInfo,
          frameworkInfo: FrameworkInfo,
          slaveInfo: SlaveInfo) {
        logDebug(UserGroupInformation.getCurrentUser.getUserName) // value is mesos' id, "hdfs"
        this.driver = driver
        val properties = Utils.deserialize[Array[(String, String)]](executorInfo.getData.toByteArray)
        executor = new Executor(
          executorInfo.getExecutorId.getValue,
          slaveInfo.getHostname,
          properties)
      }

      override def launchTask(d: ExecutorDriver, taskInfo: TaskInfo) {
        logDebug(UserGroupInformation.getCurrentUser.getUserName) // value is mesos' id, "hdfs"
        val taskId = taskInfo.getTaskId.getValue.toLong
        if (executor == null) {
          logError("Received launchTask but executor was null")
        } else {
          executor.launchTask(this, taskId, taskInfo.getData.asReadOnlyByteBuffer)
        }
      }
Thus my conclusion is that the UserGroupInformation context established by doAs
is not propagated across the JNI callback threads. And since executor.launchTask
is what actually runs the Spark tasks, the doAs should be placed inside
launchTask itself; otherwise Mesos would have to propagate the full JVM security
environment, including UserGroupInformation.
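To illustrate the placement I'm suggesting, here is a minimal plain-JDK sketch.
All names are hypothetical stand-ins (a ThreadLocal in place of the real
UserGroupInformation machinery, not the actual Spark/Mesos API): the callback
re-enters the Spark user's context itself, so the task body runs as the Spark
user even though the callback arrives on a JNI thread that never passed through
the wrapper in main():

```java
public class LaunchTaskDemo {
    // Default is the OS process user; "hdfs" in my report above.
    public static final ThreadLocal<String> currentUser =
        ThreadLocal.withInitial(() -> "hdfs");

    // Stand-in for runAsSparkUser / ugi.doAs: binds the Spark user to the
    // current thread only, for the duration of body.
    public static void runAsSparkUser(String user, Runnable body) {
        currentUser.set(user);
        try {
            body.run();
        } finally {
            currentUser.remove();
        }
    }

    // Stand-in for MesosExecutorBackend.launchTask, invoked on a
    // JNI-attached driver thread.
    public static String launchTask(String sparkUser) {
        String[] userSeenByTask = new String[1];
        // The doAs moved *inside* the callback: the task body now runs
        // as the Spark user on this very thread.
        runAsSparkUser(sparkUser, () -> userSeenByTask[0] = currentUser.get());
        return userSeenByTask[0];
    }

    public static void main(String[] args) {
        System.out.println(launchTask("1001079")); // prints 1001079, not hdfs
    }
}
```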
What do you think?