[ https://issues.apache.org/jira/browse/ARROW-9226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17303201#comment-17303201 ]
Antoine Pitrou commented on ARROW-9226:
---------------------------------------
[~wondertx] I don't think anybody on the current Arrow team has enough HDFS
expertise for this, so it won't happen unless someone contributes the code.
Do you want to submit a PR?
https://arrow.apache.org/docs/developers/contributing.html
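
To frame the discussion, here is a minimal sketch of what such a contribution could look like on the Python side. The helper names, the reliance on HADOOP_CONF_DIR, and the property lookup are assumptions for illustration, not the actual Arrow implementation:

{code:python}
import os
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

from pyarrow.fs import HadoopFileSystem


def _read_hadoop_property(name):
    # Scan core-site.xml and hdfs-site.xml under HADOOP_CONF_DIR for `name`.
    conf_dir = os.environ.get("HADOOP_CONF_DIR", "/etc/hadoop/conf")
    for fname in ("core-site.xml", "hdfs-site.xml"):
        path = os.path.join(conf_dir, fname)
        if not os.path.exists(path):
            continue
        root = ET.parse(path).getroot()
        for prop in root.iter("property"):
            if prop.findtext("name") == name:
                return prop.findtext("value")
    return None


def connect_from_config(**kwargs):
    # Hypothetical helper: build a HadoopFileSystem from fs.defaultFS.
    default_fs = _read_hadoop_property("fs.defaultFS")
    if default_fs is None:
        raise ValueError("fs.defaultFS not found in the Hadoop configuration")
    # e.g. hdfs://namenode.example.com:8020 or hdfs://nameservice1 (HA)
    parsed = urlparse(default_fs)
    host = parsed.hostname
    # HA nameservices carry no port; 0 is a guess at "let libhdfs pick".
    port = parsed.port or 0
    return HadoopFileSystem(host, port=port, **kwargs)
{code}

As a possible interim workaround: libhdfs treats the literal namenode string "default" as "use fs.defaultFS from the loaded configuration", so HadoopFileSystem("default") may already resolve the (HA) namenode from core-site.xml, provided HADOOP_CONF_DIR and the CLASSPATH are set up for libhdfs.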
> [Python] pyarrow.fs.HadoopFileSystem - retrieve options from core-site.xml or
> hdfs-site.xml if available
> --------------------------------------------------------------------------------------------------------
>
> Key: ARROW-9226
> URL: https://issues.apache.org/jira/browse/ARROW-9226
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Affects Versions: 0.17.1
> Reporter: Bruno Quinart
> Priority: Minor
> Labels: hdfs
> Fix For: 4.0.0
>
>
> The 'legacy' pyarrow.hdfs.connect was somehow able to pick up the namenode
> info from the Hadoop configuration files.
> The new pyarrow.fs.HadoopFileSystem requires the host to be specified
> explicitly.
> Inferring this info from the environment makes it easier to deploy pipelines.
> More importantly, for HA namenodes it is almost impossible to know for sure
> what to specify: during a rolling restart the active namenode changes, and
> there is no guarantee which one will be active in an HA setup.
> I tried connecting to the standby namenode. The connection gets established,
> but when writing a file an error is raised because writes to a standby
> namenode are not allowed.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)