Hi Soheil,

We have a high availability cluster as well, but I never have to specify the 
active namenode when writing, only the cluster name (the HA nameservice ID). It 
works regardless of which node is currently the active namenode.
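
For reference, here's a rough sketch of how the client side can be set up. The 
nameservice name "mycluster", the nn1/nn2 IDs, the hostnames, and the spark/df 
values below are placeholders; normally these properties live in core-site.xml 
and hdfs-site.xml on the classpath rather than in code, but they can also be 
set on the Hadoop configuration that Spark uses:

    // Hadoop configuration used by this Spark session (spark-shell style).
    val hc = spark.sparkContext.hadoopConfiguration

    // Define the logical nameservice and its two namenodes.
    hc.set("fs.defaultFS", "hdfs://mycluster")
    hc.set("dfs.nameservices", "mycluster")
    hc.set("dfs.ha.namenodes.mycluster", "nn1,nn2")
    hc.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1.example.com:8020")
    hc.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2.example.com:8020")

    // Proxy provider that lets the HDFS client fail over between nn1 and nn2.
    hc.set("dfs.client.failover.proxy.provider.mycluster",
      "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

    // Write using the nameservice instead of a namenode host; the HDFS client
    // talks to whichever namenode is active at the time.
    df.write.parquet("hdfs://mycluster/some/output/path")

With that in place the write path never names a specific namenode, so a 
failover during the job is handled by the HDFS client rather than by Spark.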

Hope that helps.

Thanks,
Subhash 

Sent from my iPhone

> On Jan 18, 2018, at 5:49 AM, Soheil Pourbafrani <soheil.i...@gmail.com> wrote:
> 
> I have an HDFS high availability cluster with two namenodes, one active and 
> one standby. When I want to write data to HDFS, I use the active namenode's 
> address. My question is: what happens if the active namenode fails while 
> Spark is writing data? Is there any way to set both the active and standby 
> namenodes in Spark for writing data?

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
