[ https://issues.apache.org/jira/browse/SPARK-35816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17366085#comment-17366085 ]

Hyukjin Kwon commented on SPARK-35816:
--------------------------------------

Spark 2.4.x is EOL. Would you mind trying with 3.0+?

> Spark read write with multiple Hadoop HA cluster limitation
> -----------------------------------------------------------
>
>                 Key: SPARK-35816
>                 URL: https://issues.apache.org/jira/browse/SPARK-35816
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.4.3
>            Reporter: Anupam Jain
>            Priority: Major
>              Labels: hadoop-ha, spark-sql
>
> I have two Hadoop HA clusters, h1 and h2, and want to read from h1's HDFS and 
> write to h2's HDFS using Spark. Because both clusters run HDFS in HA mode, the 
> Spark Hadoop configuration needs the HA details of each:
> {code:java}
> spark.sparkContext().hadoopConfiguration().set(<HADOOP_RPC_ADDRESS_AND_DETAILS>){code}
> Within a single Spark session, setting the configuration for the write target 
> overwrites the configuration for the read source, so the read is attempted 
> against the wrong cluster and fails with a file/path-not-found error.
> The same issue occurs when writing from HDFS to an external Hive table (i.e. 
> to the HDFS backing the external table), but the problem above is the main concern.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
