Github user piaozhexiu commented on the pull request:
https://github.com/apache/spark/pull/5343#issuecomment-89342280
Hi @sryza, thanks for the comment. You're right that it only takes a
single command to kill the AM. But in reality, sending that single command from
a job server isn't always that simple.
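(For reference, the single command here is "yarn application -kill <appId>",
or its programmatic equivalent via the YARN client API. A rough Scala sketch of
the latter, assuming the caller already knows the application id; the id string
below is made up:)

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.yarn.client.api.YarnClient
    import org.apache.hadoop.yarn.util.ConverterUtils

    // Kill a running YARN application (and with it the AM), given its id.
    // Same effect as `yarn application -kill <appId>` on the command line.
    val yarnClient = YarnClient.createYarnClient()
    yarnClient.init(new Configuration())
    yarnClient.start()
    try {
      // made-up application id for illustration
      val appId = ConverterUtils.toApplicationId("application_1428000000000_0001")
      yarnClient.killApplication(appId)
    } finally {
      yarnClient.stop()
    }

The whole difficulty is the appId argument: the launcher has to know it, which
is exactly what the Genie-style model below doesn't give you.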
For example, in Genie, all the launcher does to run a job is spawn a shell
process that executes the spark-submit command. As a result, the application id
assigned to the job is never passed back to the launcher. Of course, you can
argue that Genie needs to be rewritten, and we're going to. But this model has
worked well so far across Hadoop tools such as Hive, Pig, and Sqoop, so I
imagine many people will run into the same problem as me.
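To make that launcher model concrete, here is a rough Scala sketch (not Genie's
actual code) of a job server that simply forks spark-submit. The application id
is assigned inside spark-submit and only shows up in the child's output, so the
launcher never has it in hand; killing or ctrl-c'ing the child stops
spark-submit but, in cluster mode, leaves the AM running.

    // Hypothetical Genie-style launcher: fork spark-submit and wait for it.
    val pb = new ProcessBuilder(
      "spark-submit",
      "--master", "yarn-cluster",
      "--class", "com.example.MyJob",   // made-up job class and jar
      "/path/to/my-job.jar")
    pb.inheritIO()                       // child's stdout/stderr go to our console
    val proc = pb.start()
    // The YARN application id is only printed in the child's logs; nothing
    // here captures it, so there is no id to pass to a kill command later.
    val exitCode = proc.waitFor()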
Another point is that this behavior confuses users. Since they're used to
the old Hive/Pig/Sqoop behavior, they expect that ctrl-c'ing their command
kills their job. As a platform operator, I'd like to keep the behavior of all
the different tools as consistent as possible so that this confusion can be
avoided.