[
https://issues.apache.org/jira/browse/HADOOP-2905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12587456#action_12587456
]
Hadoop QA commented on HADOOP-2905:
-----------------------------------
+1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12379777/HADOOP-2905-1.patch
against trunk revision 645773.
@author +1. The patch does not contain any @author tags.
tests included +1. The patch appears to include 6 new or modified tests.
javadoc +1. The javadoc tool did not generate any warning messages.
javac +1. The applied patch does not generate any new javac compiler warnings.
release audit +1. The applied patch does not generate any new release audit warnings.
findbugs +1. The patch does not introduce any new Findbugs warnings.
core tests +1. The patch passed core unit tests.
contrib tests +1. The patch passed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2192/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2192/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2192/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2192/console
This message is automatically generated.
> fsck -move triggers NPE in namenode
> -----------------------------------
>
> Key: HADOOP-2905
> URL: https://issues.apache.org/jira/browse/HADOOP-2905
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.16.0
> Environment: hadoop-0.16 with dfs permissions disabled
> Reporter: Michael Bieniosek
> Assignee: lohit vijayarenu
> Attachments: HADOOP-2905-1.patch, HADOOP_2905.patch
>
>
> If I run hadoop fsck / -move, then the fsck fails to move any corrupt files.
> In the namenode logs, I see this error message repeated 3 times:
> 2008-02-26 21:19:07,500 INFO org.apache.hadoop.ipc.Server: IPC Server handler 5 on 10000, call mkdirs(/lost+found, null) from x.x.x.135:60819: error: java.io.IOException: java.lang.NullPointerException
> java.io.IOException: java.lang.NullPointerException
> at org.apache.hadoop.dfs.INode.setPermission(INode.java:123)
> at org.apache.hadoop.dfs.INode.setPermissionStatus(INode.java:86)
> at org.apache.hadoop.dfs.INode.<init>(INode.java:79)
> at org.apache.hadoop.dfs.INodeDirectory.<init>(INode.java:319)
> at org.apache.hadoop.dfs.FSDirectory.mkdirs(FSDirectory.java:633)
> at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1569)
> at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1544)
> at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:420)
> at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
> at java.lang.reflect.Method.invoke(Unknown Source)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:910)
> 2008-02-26 21:19:07,503 WARN org.apache.hadoop.dfs.NameNode: Cannot initialize /lost+found.
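The stack trace above shows the INode constructor dereferencing the PermissionStatus it was handed, which is null because the fsck client sent mkdirs(/lost+found, null) on a cluster with dfs permissions disabled. The following is a minimal, self-contained sketch of that failure mode and one possible guard; the class and field names are simplified stand-ins for illustration, not the actual Hadoop 0.16 classes, and the default owner/group/mode values in the guard are assumptions.

```java
public class LostFoundNpeSketch {
    // Hypothetical stand-in for org.apache.hadoop.fs.permission.PermissionStatus.
    static final class PermissionStatus {
        final String user, group;
        final short mode;
        PermissionStatus(String user, String group, short mode) {
            this.user = user; this.group = group; this.mode = mode;
        }
    }

    // Hypothetical stand-in for the directory INode.
    static final class INodeDirectory {
        final String name;
        final String user, group;
        final short mode;
        INodeDirectory(String name, PermissionStatus perm) {
            this.name = name;
            // Mirrors the reported path INode.<init> -> setPermissionStatus ->
            // setPermission: dereferencing perm throws NPE when perm is null.
            this.user = perm.user;
            this.group = perm.group;
            this.mode = perm.mode;
        }
    }

    // Buggy path: forwards whatever the RPC call sent, including null.
    static INodeDirectory mkdirsBuggy(String path, PermissionStatus perm) {
        return new INodeDirectory(path, perm);
    }

    // Guarded path: substitute a default status so a null from a client with
    // permissions disabled cannot reach the INode constructor.
    static INodeDirectory mkdirsGuarded(String path, PermissionStatus perm) {
        if (perm == null) {
            perm = new PermissionStatus("hadoop", "supergroup", (short) 0755);
        }
        return new INodeDirectory(path, perm);
    }

    public static void main(String[] args) {
        boolean sawNpe = false;
        try {
            mkdirsBuggy("/lost+found", null); // reproduces the reported NPE
        } catch (NullPointerException e) {
            sawNpe = true;
        }
        if (!sawNpe) throw new AssertionError("expected NPE from null permission");

        INodeDirectory dir = mkdirsGuarded("/lost+found", null);
        if (!"/lost+found".equals(dir.name)) throw new AssertionError();
        System.out.println("guarded mkdirs succeeded for " + dir.name);
    }
}
```

Whether the real fix belongs in the fsck client (send a concrete PermissionStatus) or in the namenode (default on null, as sketched) is exactly the kind of choice the attached patch settles; the sketch only shows why one of the two ends must stop the null.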
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.