[
https://issues.apache.org/jira/browse/SPARK-36810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18060529#comment-18060529
]
Takanobu Asanuma commented on SPARK-36810:
------------------------------------------
When we set spark.hadoop.dfs.client.failover.observer.auto-msync-period.* for
the driver and executors, the same value applies to both processes, so we could
not configure different values per process.
Here is the workaround that worked for us:
hdfs-site.xml
{code:xml}
<property>
  <name>msync.period</name>
  <value>-1</value>
</property>
<property>
  <name>dfs.client.failover.observer.auto-msync-period.<cluster_name></name>
  <value>${msync.period}</value>
</property>
{code}
spark-defaults.conf
{code}
spark.driver.defaultJavaOptions=-Dmsync.period=0
spark.executor.defaultJavaOptions=-Dmsync.period=1000ms
{code}
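For reference, the consistency issue the auto-msync period works around can also be addressed manually with the Hadoop client API. Below is a hedged sketch (not part of the workaround above): a driver-side helper that calls {{FileSystem.msync()}} before listing a path written by executors, so the Observer NameNode serves state at least as fresh as the Active NameNode's transaction ID at call time. The class and method names are illustrative, not from Spark or Hadoop.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical driver-side helper: sync with the Active NameNode before
// reading output that executors may have written through a different client.
public class DriverSideRead {
    public static FileStatus[] listAfterExecutorWrite(String pathStr)
            throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path(pathStr);
        FileSystem fs = FileSystem.get(path.toUri(), conf);
        // Forces this client to catch up to the Active NameNode's latest
        // state, so a subsequent read served by an Observer is consistent.
        fs.msync();
        return fs.listStatus(path);
    }
}
{code}

This trades one extra RPC to the Active NameNode per call for read-after-write consistency, whereas the auto-msync-period setting amortizes that cost over a time window.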
> Handle HDFS read inconsistencies on Spark when observer Namenode is used
> ------------------------------------------------------------------------
>
> Key: SPARK-36810
> URL: https://issues.apache.org/jira/browse/SPARK-36810
> Project: Spark
> Issue Type: Bug
> Components: Spark Core, SQL
> Affects Versions: 3.2.0
> Reporter: Venkata krishnan Sowrirajan
> Priority: Major
>
> In short, with HDFS HA and with the use of [Observer
> Namenode|https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html],
> read-after-write consistency is only guaranteed when both the write and
> the read happen from the same client.
> But if the write happens on an executor and the read happens on the driver,
> the reads can be inconsistent, causing application failures. This can be
> fixed by calling `FileSystem.msync` before any read call where the client
> suspects the write may have happened elsewhere.
> This issue is discussed in greater detail in this
> [discussion|https://mail-archives.apache.org/mod_mbox/spark-dev/202108.mbox/browser]
>
--
This message was sent by Atlassian Jira
(v8.20.10#820010)