I know that, thanks, but it's not a reliable solution.
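To make it more concrete, here is roughly the kind of setup I'd expect to need instead (only a sketch; the nameservice ids ns1/ns2 and the hostnames below are made up). Since every cluster runs NN HA, the Spark client would have to know each cluster's HA nameservice, so that paths can be fully qualified without hard-coding a single NameNode host:port:

    // Sketch: add a second HA nameservice ("ns2") to the client-side Hadoop
    // configuration, so hdfs://ns2/... resolves to whichever NameNode is active.
    // Assumes ns1 (the local cluster) is already defined in hdfs-site.xml.
    val hc = spark.sparkContext.hadoopConfiguration

    hc.set("dfs.nameservices", "ns1,ns2")

    // Remote cluster "ns2": its two HA NameNodes and their RPC addresses (hypothetical hosts).
    hc.set("dfs.ha.namenodes.ns2", "nn1,nn2")
    hc.set("dfs.namenode.rpc-address.ns2.nn1", "nn1.cluster2.example.com:8020")
    hc.set("dfs.namenode.rpc-address.ns2.nn2", "nn2.cluster2.example.com:8020")
    hc.set("dfs.client.failover.proxy.provider.ns2",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    // With that in place, fully-qualified paths work against either cluster:
    val df = spark.read.csv("hdfs://ns2/data/input.csv")
    df.write.parquet("hdfs://ns1/data/output")

The open question is how to express this cleanly per interpreter / per user in Zeppelin, rather than hard-coding it in each notebook.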

2017-03-26 5:23 GMT+02:00 Jianfeng (Jeff) Zhang <jzh...@hortonworks.com>:

>
> You can try to specify the namenode address for the HDFS file, e.g.
>
> spark.read.csv("hdfs://localhost:9009/file")
>
> Best Regards,
> Jeff Zhang
>
>
> From: Serega Sheypak <serega.shey...@gmail.com>
> Reply-To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Date: Sunday, March 26, 2017 at 2:47 AM
> To: "users@zeppelin.apache.org" <users@zeppelin.apache.org>
> Subject: Setting Zeppelin to work with multiple Hadoop clusters when
> running Spark.
>
> Hi, I have three Hadoop clusters. Each cluster has its own NN HA
> configured and YARN.
> I want to allow users to read from any cluster and write to any cluster.
> Also, users should be able to choose where their Spark job runs.
> What is the right way to configure this in Zeppelin?
>
>
