[
https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16619808#comment-16619808
]
Pranay Singh commented on HDFS-1915:
------------------------------------
In the latest version, when using the fuse-dfs filesystem on a single-node cluster
setup, I see that the exception below is generated when a file is appended.
I used the following test case to append to the file foo.
fuse_dfs on /mnt/hdfs type fuse.fuse_dfs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,default_permissions,allow_other)
/mnt/hdfs# echo "This is test" > foo
/mnt/hdfs# echo "This is append" >> foo
2018-09-18 10:18:53,327 WARN [Thread-9] hdfs.DataStreamer (DataStreamer.java:run(826)) - DataStreamer Exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-2707e35e-38b9-473e-aa29-780d556e3a7b,DISK]], original=[DatanodeInfoWithStorage[127.0.0.1:9866,DS-2707e35e-38b9-473e-aa29-780d556e3a7b,DISK]]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1304)
        at org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1372)
        at org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1598)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1499)
        at org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1481)
        at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:720)
The reason for this exception is that there is a single datanode running in
the setup, while the code expects another datanode to be added to the
existing pipeline. Since there is no additional datanode available, the
exception is thrown.
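For reference, the same code path can be hit without the FUSE mount, through the plain Java client. The sketch below is only an illustration, not part of the fuse-dfs code: it assumes the default replication factor of 3, a classpath configuration whose fs.defaultFS points at the single-datanode cluster, and a hypothetical test path /foo.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendRepro {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml/hdfs-site.xml from the classpath; fs.defaultFS
        // is assumed to point at the single-datanode cluster.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path foo = new Path("/foo"); // hypothetical test path
        try (FSDataOutputStream out = fs.create(foo)) {
            out.writeBytes("This is test\n");   // like: echo "This is test" > foo
        }
        // The append reopens the last block's write pipeline; with replication 3
        // and only one datanode, the DEFAULT policy tries to add a replacement
        // datanode and fails with the IOException shown in the trace above.
        try (FSDataOutputStream out = fs.append(foo)) {
            out.writeBytes("This is append\n"); // like: echo "This is append" >> foo
        }
    }
}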
I'm using the following version of Hadoop:
Hadoop 3.2.0-SNAPSHOT
Source code repository git://github.com/apache/hadoop.git -r b3161c4dd9367c68b30528a63c03756eaa32aaf9
Compiled by pranay on 2018-09-18T21:55Z
Compiled with protoc 2.5.0
From source with checksum 3729197aa9714ae9dab9a8a6d8f
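As a possible workaround on a single-datanode (or very small) cluster, rather than a fix for fuse-dfs itself, the client can be told not to attempt datanode replacement on append. A minimal sketch; the class name is hypothetical, and the two configuration keys are the ones named in the exception message and in hdfs-default.xml:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class RelaxedAppendClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // On a single-datanode cluster there is no replacement node to add,
        // so tell the client not to look for one when the pipeline shrinks:
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        // or disable the replace-datanode-on-failure feature entirely:
        // conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", false);
        FileSystem fs = FileSystem.get(conf);
        // ... create/append as usual with this FileSystem instance ...
    }
}

For a fuse_dfs mount there is no per-call Configuration object, so the same two keys would presumably have to go into the hdfs-site.xml that libhdfs loads. Note that NEVER gives up pipeline recovery by replacement, so it only makes sense where there are too few datanodes to replace anything anyway.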
> fuse-dfs does not support append
> --------------------------------
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: fuse-dfs
> Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
> Reporter: Sampath K
> Assignee: Pranay Singh
> Priority: Major
>
> Environment: Cloudera CDH3, EC2 cluster with 2 datanodes and 1 namenode (using Ubuntu 10.04 LTS large instances); mounted HDFS in the OS using fuse-dfs.
> Able to do hadoop fs -put, but when I try to use an FTP client (FTP PUT) to do the same, I get the following error. I am using vsftpd on the server.
> Changed the mounted folder permissions to a+w to rule out any write permission issues. I was able to do an FTP GET on the same mounted volume.
> Please advise
> FTPd Log
> ==============
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in NameNode log (I did an FTP GET on counter.txt and an FTP PUT with counter1.txt)
> ===============================
> 2011-05-11 01:03:02,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.