[
https://issues.apache.org/jira/browse/HDFS-411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Harsh J updated HDFS-411:
-------------------------
Resolution: Duplicate
Status: Resolved (was: Patch Available)
This was resolved by HDFS-856. Marking as duplicate.
> parameter dfs.replication is not reflected when put file into hadoop with
> fuse-dfs
> ----------------------------------------------------------------------------------
>
> Key: HDFS-411
> URL: https://issues.apache.org/jira/browse/HDFS-411
> Project: Hadoop HDFS
> Issue Type: Bug
> Environment: os:centos5.2
> cpu:amd64
> hadoop0.19.0
> Reporter: weimin zhu
> Attachments: HADOOP-4877.txt.0.19, HADOOP-4877.txt.trunk
>
>
> the $HADOOP_CONF_DIR exists in the $CLASSPATH,
> and dfs.replication is set to 1 with the following in hadoop-site.xml:
> <property>
> <name>dfs.replication</name>
> <value>1</value>
> </property>
> The file's replication is 3 when it is put into HDFS.
> I think the reason is a hardcoded value in src/contrib/fuse-dfs/src/fuse_dfs.c:
> line 1337
> if ((fh->hdfsFH = (hdfsFile)hdfsOpenFile(fh->fs, path, flags, 0, 3, 0)) == NULL) {
> line 1591
> if ((file = (hdfsFile)hdfsOpenFile(userFS, path, flags, 0, 3, 0)) == NULL) {
> The fifth parameter is hardcoded in these calls to hdfsOpenFile.
> It should be set to 0, as follows:
> line 1337
> if ((fh->hdfsFH = (hdfsFile)hdfsOpenFile(fh->fs, path, flags, 0, 0, 0)) == NULL) {
> line 1591
> if ((file = (hdfsFile)hdfsOpenFile(userFS, path, flags, 0, 0, 0)) == NULL) {
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira