[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sunil Govindan updated HDFS-1915:
---------------------------------
    Fix Version/s: (was: 3.2.0)

> fuse-dfs does not support append
> --------------------------------
>
>                 Key: HDFS-1915
>                 URL: https://issues.apache.org/jira/browse/HDFS-1915
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: fuse-dfs
>    Affects Versions: 0.20.2
>         Environment: Ubuntu 10.04 LTS on EC2
>            Reporter: Sampath K
>            Assignee: Pranay Singh
>            Priority: Major
>         Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch,
>                      HDFS-1915.003.patch, HDFS-1915.004.patch
>
>
> Environment: Cloudera CDH3; EC2 cluster with 2 datanodes and 1 namenode
> (Ubuntu 10.04 LTS large instances); HDFS mounted in the OS using fuse-dfs.
>
> I am able to do `hadoop fs -put`, but when I try to upload the same file
> with an FTP client (FTP PUT) I get the error below. I am using vsftpd on
> the server. I changed the mounted folder's permissions to a+w to rule out
> any write-permission issue, and an FTP GET on the same mounted volume
> works. Please advise.
>
> FTPd log
> ========
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
>
> Error in namenode log (I did an FTP GET on counter.txt and a PUT of counter1.txt)
> =================================================================================
> 2011-05-11 01:03:02,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
>
> No activity shows up in datanode logs.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
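[Editor's note] The stack trace above bottoms out in a simple precondition: the NameNode's append() is only valid on a file that already exists, so any client path that reaches HDFS as an append on a not-yet-created file (as the FTP PUT did through fuse-dfs) gets a FileNotFoundException. A minimal Python model of that precondition — illustrative only; `MiniNamespace` is a hypothetical stand-in for the NameNode namespace, not a Hadoop API:

```python
class MiniNamespace:
    """Toy model of the NameNode namespace (hypothetical, for illustration)."""

    def __init__(self):
        self.files = {}  # path -> file contents

    def create(self, path, data=b""):
        self.files[path] = data

    def append(self, path, data):
        # Mirrors the precondition enforced in FSNamesystem.startFileInternal:
        # append is only legal on a file that already exists.
        if path not in self.files:
            raise FileNotFoundError(
                "failed to append to non-existent file %s" % path)
        self.files[path] += data


ns = MiniNamespace()
ns.create("/upload/counter.txt", b"0123456789")
ns.append("/upload/counter.txt", b"!")          # OK: the file exists

try:
    ns.append("/upload/counter1.txt", b"data")  # fails like the log above
except FileNotFoundError as e:
    print(e)  # -> failed to append to non-existent file /upload/counter1.txt
```

This is why the upload fails at the NameNode with no datanode activity: the request is rejected before any block is ever allocated.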
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-1915:
----------------------------------
    Issue Type: Improvement (was: New Feature)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wei-Chiu Chuang updated HDFS-1915:
----------------------------------
    Issue Type: Bug (was: Improvement)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: Patch Available (was: In Progress)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: In Progress (was: Patch Available)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Attachment: HDFS-1915.004.patch
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: Patch Available (was: In Progress)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Attachment: HDFS-1915.003.patch

> fuse-dfs does not support append
> --------------------------------
>
>                 Key: HDFS-1915
>                 URL: https://issues.apache.org/jira/browse/HDFS-1915
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: fuse-dfs
>    Affects Versions: 0.20.2
>         Environment: Ubuntu 10.04 LTS on EC2
>            Reporter: Sampath K
>            Assignee: Pranay Singh
>            Priority: Major
>             Fix For: 3.2.0
>
>         Attachments: HDFS-1915.001.patch, HDFS-1915.002.patch, HDFS-1915.003.patch
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name node
> (Ubuntu 10.04 LTS large instances); mounted HDFS in the OS using fuse-dfs.
> I am able to do hadoop fs -put, but when I try to use an FTP client (ftp PUT)
> to do the same, I get the following error. I am using vsftpd on the server.
> I changed the mounted folder permissions to a+w to rule out any write
> permission issues, and I was able to do an FTP GET on the same mounted volume.
> Please advise.
>
> FTPd Log
> ========
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
>
> Error in NameNode Log (I did an FTP GET on counter.txt and a PUT with counter1.txt)
> ===================================================================================
> 2011-05-11 01:03:02,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
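The stack trace above shows the append call failing inside FSNamesystem.startFileInternal because the target file was never created. As a toy illustration of that check (plain Python, not the actual Hadoop code; the class and method names here are invented for the sketch):

```python
class MiniNamespace:
    """Toy stand-in for the NameNode's file map (illustrative only)."""

    def __init__(self):
        self.files = {}

    def create(self, path, data=b""):
        # HDFS create: starts a brand-new file at the given path.
        self.files[path] = data

    def append(self, path, data):
        # Mirrors the check behind the WARN line in the log: appending
        # to a path that was never created is an error, not a create.
        if path not in self.files:
            raise FileNotFoundError(
                "failed to append to non-existent file %s" % path)
        self.files[path] += data


ns = MiniNamespace()
ns.create("/upload/counter.txt", b"0123456789")   # the FTP GET target exists
try:
    ns.append("/upload/counter1.txt", b"new data")  # the FTP PUT target does not
except FileNotFoundError as e:
    print(e)
```

This is why the PUT fails while the GET succeeds: the upload path reaches the NameNode as an append on a file that does not exist yet.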
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: In Progress  (was: Patch Available)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: In Progress  (was: Patch Available)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Attachment: HDFS-1915.002.patch
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Status: Patch Available  (was: In Progress)
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Fix Version/s: 3.2.0
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pranay Singh updated HDFS-1915:
-------------------------------
    Attachment: HDFS-1915.001.patch
[jira] [Updated] (HDFS-1915) fuse-dfs does not support append
[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins updated HDFS-1915:
------------------------------
    Component/s:     (was: name-node)
                 contrib/fuse-dfs
     Issue Type: New Feature  (was: Bug)
        Summary: fuse-dfs does not support append  (was: Error in create file while using FUSE)

fuse-dfs does not currently support append (which is what the FTP client is trying to do). The fuse-dfs code in CDH3, by the way, is essentially the same as what's in trunk; development is done on trunk first.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
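Eli's comment pins down the missing piece: HDFS files are write-once, so a writable open through the FUSE layer has to be routed to exactly one of create or append, and an append-style open of a not-yet-existing file must fall back to create. A hypothetical sketch of that flag mapping (illustrative Python, not the fuse-dfs source; `choose_hdfs_open_mode` is an invented helper name):

```python
import os

def choose_hdfs_open_mode(flags, file_exists):
    """Map POSIX open(2) flags onto the single write mode HDFS allows
    per open. Hypothetical decision logic, for illustration only."""
    wants_write = bool(flags & (os.O_WRONLY | os.O_RDWR))
    if not wants_write:
        return "read"
    if file_exists and (flags & os.O_APPEND):
        return "append"   # append is only valid on an existing file
    return "create"       # new file, or truncate-and-rewrite

# The failing FTP upload corresponds to an append-style open of a file
# that does not exist yet; a working mapping routes it to create.
mode = choose_hdfs_open_mode(os.O_WRONLY | os.O_APPEND, file_exists=False)
```

With append support in place, the same mapping can return "append" for an existing file instead of erroring out, which is what the attached patches aim to enable.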