[ https://issues.apache.org/jira/browse/HDFS-1915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16620140#comment-16620140 ]
Hadoop QA commented on HDFS-1915:
---------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 31s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m 56s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDFS-1915 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12940346/HDFS-1915.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 0537e01933f7 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb85351 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/25097/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 10000) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/25097/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
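On the test4tests -1 above: if a regression test is added in a follow-up, a MiniDFSCluster-based create-then-append check along the lines of the sketch below could cover the append path. This is only an illustrative sketch; the class and test names are hypothetical, and it is not part of the attached HDFS-1915.001.patch.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Assert;
import org.junit.Test;

// Hypothetical class/test names; shown only as a sketch of a possible regression test.
public class TestFuseAppendSketch {
  @Test
  public void testCreateThenAppend() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path p = new Path("/upload/counter1.txt");   // path taken from the report below
      try (FSDataOutputStream out = fs.create(p)) {
        out.writeBytes("first\n");                 // 6 bytes
      }
      try (FSDataOutputStream out = fs.append(p)) {
        out.writeBytes("second\n");                // 7 bytes
      }
      // The appended bytes should be reflected in the file length (6 + 7 = 13).
      Assert.assertEquals(13L, fs.getFileStatus(p).getLen());
    } finally {
      cluster.shutdown();
    }
  }
}
{code}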
> fuse-dfs does not support append
> --------------------------------
>
> Key: HDFS-1915
> URL: https://issues.apache.org/jira/browse/HDFS-1915
> Project: Hadoop HDFS
> Issue Type: New Feature
> Components: fuse-dfs
> Affects Versions: 0.20.2
> Environment: Ubuntu 10.04 LTS on EC2
> Reporter: Sampath K
> Assignee: Pranay Singh
> Priority: Major
> Fix For: 3.2.0
>
> Attachments: HDFS-1915.001.patch
>
>
> Environment: Cloudera CDH3, EC2 cluster with 2 data nodes and 1 name
> node (Ubuntu 10.04 LTS large instances), with HDFS mounted in the OS via
> fuse-dfs.
> I am able to do an HDFS fs -put, but when I try to do the same through an
> FTP client (FTP PUT), I get the following error. I am using vsftpd on the
> server.
> I changed the mounted folder's permissions to a+w to rule out any write
> permission issues, and I was able to do an FTP GET on the same mounted
> volume.
> Please advise.
> FTPd Log
> ==============
> Tue May 10 23:45:00 2011 [pid 2] CONNECT: Client "127.0.0.1"
> Tue May 10 23:45:09 2011 [pid 1] [ftpuser] OK LOGIN: Client "127.0.0.1"
> Tue May 10 23:48:41 2011 [pid 3] [ftpuser] OK DOWNLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter.txt", 10 bytes, 0.42Kbyte/sec
> Tue May 10 23:49:24 2011 [pid 3] [ftpuser] FAIL UPLOAD: Client "127.0.0.1", "/hfsmnt/upload/counter1.txt", 0.00Kbyte/sec
> Error in NameNode log (I did an FTP GET on counter.txt and an FTP PUT with
> counter1.txt)
> ===============================
> 2011-05-11 01:03:02,822 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:02,825 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,275 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=root ip=/10.32.77.36 cmd=listStatus src=/upload dst=null perm=null
> 2011-05-11 01:03:20,290 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=ftpuser ip=/10.32.77.36 cmd=open src=/upload/counter.txt dst=null perm=null
> 2011-05-11 01:03:31,115 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.startFile: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> 2011-05-11 01:03:31,115 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call append(/upload/counter1.txt, DFSClient_1590956638) from 10.32.77.36:56454: error: java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> java.io.FileNotFoundException: failed to append to non-existent file /upload/counter1.txt on client 10.32.77.36
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1166)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1336)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:596)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1415)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1411)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1115)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1409)
> No activity shows up in datanode logs.
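For reference, the exception above is the NameNode rejecting append() on a path that does not exist yet: the FTP upload creates a brand-new file, so the write has to start with create() rather than append(). A minimal client-side sketch of that fallback against the public FileSystem API follows; it is only an illustration under stated assumptions (the helper name is hypothetical, and the actual fuse-dfs code path goes through libhdfs in C, not this Java snippet). On releases of that vintage, fs.append() additionally requires append support to be enabled (dfs.support.append).

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateOrAppendSketch {
  // Hypothetical helper: append() throws FileNotFoundException for a missing
  // path (as in the NameNode log above), so fall back to create() in that case.
  // The exists()/append() pair is racy under concurrent writers; this only
  // illustrates why appending to a freshly uploaded file fails.
  static FSDataOutputStream openForWrite(FileSystem fs, Path path) throws IOException {
    return fs.exists(path) ? fs.append(path) : fs.create(path);
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();        // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/upload/counter1.txt");       // path from the failed upload above
    try (FSDataOutputStream out = openForWrite(fs, p)) {
      out.writeBytes("appended line\n");
    }
  }
}
{code}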