Re: Spark job ends abruptly during setup without error message

2015-02-05 Thread Arun Luthra
I'm submitting this on a cluster, with my usual setting of export
YARN_CONF_DIR=/etc/hadoop/conf

It is working again after a small change to the code so I will see if I can
reproduce the error (later today).
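Since a missing or empty YARN_CONF_DIR is a common cause of confusing client-side behavior, a small pre-flight check before calling spark-submit can rule it out early. This is only a sketch; the helper name is mine, not something from the thread:

```shell
#!/bin/sh
# Hypothetical pre-flight check: refuse to proceed when YARN_CONF_DIR is
# unset, so the Spark client cannot silently pick up a wrong or empty
# Hadoop configuration.
check_yarn_conf() {
  if [ -z "${YARN_CONF_DIR:-}" ]; then
    echo "YARN_CONF_DIR is not set; export it first, e.g." >&2
    echo "  export YARN_CONF_DIR=/etc/hadoop/conf" >&2
    return 1
  fi
  echo "Using YARN config from: $YARN_CONF_DIR"
}

# Sketch of intended use (submission line is illustrative only):
# check_yarn_conf && spark-submit --master yarn-client ...
```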

On Thu, Feb 5, 2015 at 9:17 AM, Arush Kharbanda 
wrote:

> Are you submitting the job from your local machine or on the driver
> machine?
>
> Have you set YARN_CONF_DIR?
>
> On Thu, Feb 5, 2015 at 10:43 PM, Arun Luthra 
> wrote:
>
>> While a spark-submit job is setting up, the yarnAppState goes into
>> Running mode, then I get a flurry of typical-looking INFO-level messages
>> such as
>>
>> INFO BlockManagerMasterActor: ...
>> INFO YarnClientSchedulerBackend: Registered executor:  ...
>>
>> Then, spark-submit quits without any error message and I'm back at the
>> command line. What could be causing this?
>>
>> Arun
>>


Re: Spark job ends abruptly during setup without error message

2015-02-05 Thread Arush Kharbanda
Are you submitting the job from your local machine or on the driver
machine?

Have you set YARN_CONF_DIR?

On Thu, Feb 5, 2015 at 10:43 PM, Arun Luthra  wrote:

> While a spark-submit job is setting up, the yarnAppState goes into Running
> mode, then I get a flurry of typical-looking INFO-level messages such as
>
> INFO BlockManagerMasterActor: ...
> INFO YarnClientSchedulerBackend: Registered executor:  ...
>
> Then, spark-submit quits without any error message and I'm back at the
> command line. What could be causing this?
>
> Arun
>



-- 

[image: Sigmoid Analytics] 

*Arush Kharbanda* || Technical Teamlead

ar...@sigmoidanalytics.com || www.sigmoidanalytics.com


Spark job ends abruptly during setup without error message

2015-02-05 Thread Arun Luthra
While a spark-submit job is setting up, the yarnAppState goes into Running
mode, then I get a flurry of typical-looking INFO-level messages such as

INFO BlockManagerMasterActor: ...
INFO YarnClientSchedulerBackend: Registered executor:  ...

Then, spark-submit quits without any error message and I'm back at the
command line. What could be causing this?

Arun
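When spark-submit exits silently like this, the real error is usually in the YARN container logs rather than on the client console. One way to chase it is to capture the driver output, pull the application id out of it, and fetch the aggregated logs with the `yarn logs` CLI. A rough sketch, assuming the client printed a standard "Submitted application application_..." line (the log format and file names here are my assumptions, not taken from the thread):

```shell
#!/bin/sh
# Sketch: recover the YARN application id from captured spark-submit output
# so the aggregated container logs can be inspected for the actual failure.

extract_app_id() {
  # Pull the first application_<clusterTimestamp>_<sequence> token
  # from a saved driver log file.
  grep -o 'application_[0-9]*_[0-9]*' "$1" | head -n 1
}

# Intended use (commands below are illustrative, not run here):
# spark-submit --master yarn-client ... 2>&1 | tee driver.log
# app_id=$(extract_app_id driver.log)
# yarn logs -applicationId "$app_id" | less
```

Even if the client process dies without a message, the application master and executor containers often leave a stack trace in those aggregated logs.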