It turns out my build had the Spark assembly jar in the target/dependency dir, 
so resource localization failed: Spark was writing the assembly twice, once 
from the dependency dir (where it got swept in with the other dependency jars) 
and once as the normal localization of the assembly for the main code line. 
YARN doesn't like it when a file is overwritten in HDFS after it's been 
registered as a local resource. The NodeManager logs are your friend!
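
In case it helps, here's a rough sketch in Python of how you could build the 
jar list for --addJars while skipping the assembly so it only gets localized 
once. The dependency dir path and the "spark-assembly" name match are just 
illustrative assumptions, not exactly what my build uses:

# Build a comma-separated jar list for --addJars that leaves out the
# Spark assembly, so it isn't uploaded to HDFS twice.
import glob
import os

dep_dir = "target/dependency"  # hypothetical dependency directory

jars = [
    jar
    for jar in sorted(glob.glob(os.path.join(dep_dir, "*.jar")))
    if "spark-assembly" not in os.path.basename(jar)  # skip the assembly
]

# spark-submit expects a comma-separated list
print(",".join(jars))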

Just sharing in case other folks run into the same problem.

Thanks,
Ron


> On Jul 25, 2014, at 9:36 AM, Ron Gonzalez <zlgonza...@yahoo.com> wrote:
> 
> Folks,
>   I've been able to submit simple jobs to YARN so far. However, when I tried 
> something more complicated that added 194 dependency jars using --addJars, 
> the job failed in YARN with no logs. What ends up happening is that no 
> container logs get created at all (app master or executor). If I add just a 
> couple of dependencies, it works, so this is clearly a case of too many 
> dependencies being passed into the invocation.
> 
>   I'm not sure whether this means that no container was created at all, but 
> the bottom line is that I get no logs to help me determine what's wrong. 
> Given the large number of jars, I figured it might be a PermGen issue, so I 
> added the corresponding JVM options. However, that didn't help. It seems as 
> if the actual submission was never even spawned, since no container was 
> created and no log was found.
> 
>   Any ideas for this?
> 
> Thanks,
> Ron
