Github user jongyoul commented on the issue:

    https://github.com/apache/zeppelin/pull/3015
  
    @felixcheung AFAIK, in `yarn-client` mode the job is created from the scripts on the driver, and the dependencies are added at that point, so those jars get propagated to the executors. In `yarn-cluster` mode, the driver is launched on one of the nodes of the YARN cluster and the job is created there, but Spark wouldn't have those dependencies because they are not copied to the node where the job is created.
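
    For illustration only, a minimal sketch (jar paths, app name, and object name below are placeholders, not from this PR) of declaring dependency jars up front so Spark distributes them, rather than assuming they already exist on whichever node the driver ends up on in `yarn-cluster` mode:

    ```scala
    import org.apache.spark.{SparkConf, SparkContext}

    // Sketch: declare dependency jars explicitly so Spark ships them to YARN.
    object DependencyShippingSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("dependency-shipping-sketch")
          // Deploy mode (client vs cluster) is normally chosen on the
          // spark-submit command line with --deploy-mode.
          .setMaster("yarn")
          // Comma-separated dependency jars; with spark-submit these would
          // normally be supplied via --jars or spark-defaults.conf so they
          // are uploaded before the driver starts, which is what matters
          // in yarn-cluster mode.
          .set("spark.jars", "/local/path/dep-a.jar,/local/path/dep-b.jar")

        val sc = new SparkContext(conf)

        // Jars can also be added after the context is created; in
        // yarn-client mode the driver runs on the client machine, so this
        // local path is resolved there and then served to the executors.
        sc.addJar("/local/path/dep-c.jar")

        // ... run the job ...
        sc.stop()
      }
    }
    ```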

