Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Serega Sheypak
I know it, thanks, but it's not a reliable solution.

Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Jianfeng (Jeff) Zhang
"users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>" <users@zeppelin.apache.org<mailto:users@zeppelin.apache.org>> Subject: Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark. I know it, thanks, but it's non reliable solution. 2017-03-26 5:23

Re: Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-26 Thread Serega Sheypak

Setting Zeppelin to work with multiple Hadoop clusters when running Spark.

2017-03-25 Thread Serega Sheypak
Hi, I have three Hadoop clusters. Each cluster has its own NameNode HA and YARN configured. I want to allow users to read from any cluster and write to any cluster. Users should also be able to choose which cluster runs their Spark job. What is the right way to configure this in Zeppelin?
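One common approach (a sketch, not something confirmed in this thread) is to merge the HA nameservice definitions of all clusters into the client-side hdfs-site.xml that the Spark interpreter sees, so each cluster can be addressed by its logical URI (e.g. hdfs://clusterA/path) regardless of which YARN cluster runs the job. The nameservice IDs and hostnames below are hypothetical placeholders:

```xml
<!-- Sketch of an hdfs-site.xml fragment, assuming two HA clusters with
     hypothetical nameservice IDs "clusterA" and "clusterB". Defining both
     nameservices in one client config lets a single Spark job read or
     write either cluster via hdfs://clusterA/... or hdfs://clusterB/... -->
<property>
  <name>dfs.nameservices</name>
  <value>clusterA,clusterB</value>
</property>

<property>
  <name>dfs.ha.namenodes.clusterA</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn1</name>
  <value>nn1.clusterA.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.clusterA.nn2</name>
  <value>nn2.clusterA.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.clusterA</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<!-- Repeat the same three per-nameservice properties for clusterB,
     pointing at that cluster's NameNode hosts. -->
```

For choosing where the Spark job itself runs, one option is to create one Spark interpreter per cluster in Zeppelin and point each at a different HADOOP_CONF_DIR (containing that cluster's yarn-site.xml plus the merged hdfs-site.xml above), then select the interpreter per note.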