Making the master yarn-cluster means that the driver runs on YARN, not just
the executor nodes. It is then independent of your application and can only
be killed via YARN commands (e.g. yarn application -kill <applicationId>),
or when a batch job completes on its own. The simplest way to tie the driver
to your application is to pass yarn-client as the master instead.
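
For example, a minimal sketch (reusing the arguments quoted below with
yarn-client swapped in; the class name is illustrative):

    import org.apache.spark.deploy.SparkSubmit;

    class ClientModeSubmit {

        public static void main(String[] args) {
            // Same options as in the quoted message, but with yarn-client
            // as master so the driver runs inside this JVM.
            String[] sparkArgs = new String[] {
                "--master", "yarn-client",
                "--name", "pysparkexample",
                "--executor-memory", "1G",
                "--driver-memory", "1G",
                "--conf", "spark.eventLog.enabled=true",
                "--verbose",
                "pi.py"
            };
            // In client mode SparkSubmit.main blocks while the driver runs,
            // so killing this process also terminates the Spark job.
            SparkSubmit.main(sparkArgs);
        }
    }
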
On Fri, May 6, 2016 at 2:00 PM satish saley <satishsale...@gmail.com> wrote:

> Hi Anthony,
>
> I am passing
>
>                     --master yarn-cluster
>                     --name pysparkexample
>                     --executor-memory 1G
>                     --driver-memory 1G
>                     --conf spark.yarn.historyServer.address=http://localhost:18080
>                     --conf spark.eventLog.enabled=true
>                     --verbose
>                     pi.py
>
>
> I am able to run the job successfully. I just want to get it killed 
> automatically whenever I kill my application.
>
>
> On Fri, May 6, 2016 at 11:58 AM, Anthony May <anthony...@gmail.com> wrote:
>
>> Greetings Satish,
>>
>> What are the arguments you're passing in?
>>
>> On Fri, 6 May 2016 at 12:50 satish saley <satishsale...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> I am submitting a Spark job using SparkSubmit. When I kill my
>>> application, it does not kill the corresponding Spark job. How would I kill
>>> the corresponding Spark job? I know one way is to use SparkSubmit again
>>> with appropriate options. Is there any way through which I can tell
>>> SparkSubmit at the time of job submission itself? Here is my code:
>>>
>>>
>>> import org.apache.spark.deploy.SparkSubmit;
>>>
>>> class MyClass {
>>>
>>>     public static void main(String[] args) {
>>>         // preparing args
>>>         SparkSubmit.main(args);
>>>     }
>>> }
>>>
>
