Hi, I recently responded to a question about this on Stack Overflow ( http://stackoverflow.com/questions/33313041/is-it-possible-to-run-zeppelin-with-spark-yarn-cluster/33318160#33318160 ). The asker raised an interesting use case that I don't know how to address:
"The reason being, every time zeppelin needs restart, executors gets killed and hence cache is lost. To avoid this behavior, spark should run as yarn-cluster so that driver run in application master"

I think we should look at this comment and consider:

1) Can we support yarn-cluster mode?
2) Why does he need to restart Zeppelin so often that it affects his cache?
3) Is there any other solution?

Does anyone have an idea?

Eran

--
Eran | "You don't need eyes to see, you need vision" (Faithless)