[ https://issues.apache.org/jira/browse/HDFS-1320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12903801#action_12903801 ]

Erik Steffl commented on HDFS-1320:
-----------------------------------

HDFS-1320-0.22-3.patch is an updated patch that resolves conflicts with recent changes on trunk.

Given that Hudson is not working at the moment, I ran 'ant test-patch' and 'ant test' myself; the results are below.

ant test-patch results:

     [exec] There appear to be 97 release audit warnings before the patch and 97 release audit warnings after applying the patch.
     [exec]
     [exec]
     [exec]
     [exec]
     [exec] -1 overall.
     [exec]
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec]
     [exec]     +1 tests included.  The patch appears to include 28 new or modified tests.
     [exec]
     [exec]     -1 javadoc.  The javadoc tool appears to have generated 1 warning messages.
     [exec]
     [exec]     +1 javac.  The applied patch does not increase the total number of javac compiler warnings.
     [exec]
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs warnings.
     [exec]
     [exec]     +1 release audit.  The applied patch does not increase the total number of release audit warnings.
     [exec]
     [exec]

The javadoc warning is unrelated to the patch:

     [exec]   [javadoc] /home/steffl/work/svn.isDebugEnabled/hdfs-trunk/src/java/org/apache/hadoop/hdfs/server/datanode/metrics/FSDatasetMBean.java:40: warning - Tag @see: reference not found: org.apache.hadoop.hdfs.server.datanode.metrics.DataNodeStatisticsMBean
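
For reference, a minimal sketch (a made-up interface, not the actual FSDatasetMBean source) of the kind of @see tag that makes javadoc emit "Tag @see: reference not found" when the referenced class cannot be resolved on the javadoc path:

    /**
     * Hypothetical MBean interface used only to illustrate the warning.
     *
     * @see org.apache.hadoop.hdfs.server.datanode.metrics.DataNodeStatisticsMBean
     */
    public interface ExampleMBean {
        long getCapacity();
    }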

ant test results:

BUILD FAILED
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:709: The following error occurred while executing this line:
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:477: The following error occurred while executing this line:
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/src/test/aop/build/aop.xml:229: The following error occurred while executing this line:
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:667: The following error occurred while executing this line:
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:624: The following error occurred while executing this line:
/home/steffl/work/svn.isDebugEnabled/hdfs-trunk/build.xml:692: Tests failed!

The failures are unrelated to this patch (two failures and one error).

Error:

Build log:

    [junit] Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
    [junit]     at org.junit.Assert.fail(Assert.java:91)
    [junit]     at org.junit.Assert.failNotEquals(Assert.java:645)
    [junit]     at org.junit.Assert.assertEquals(Assert.java:126)
    [junit]     at org.junit.Assert.assertEquals(Assert.java:470)
    [junit]     at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:105)
    [junit]     at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:88)
    [junit]     at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:29)
    [junit]     at org.mockito.internal.MockHandler.handle(MockHandler.java:95)
    [junit]     at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
    [junit]     at org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol$$EnhancerByMockitoWithCGLIB$$4e50a34e.getReplicaVisibleLength(<generated>)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    [junit]     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    [junit]     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    [junit]     at java.lang.reflect.Method.invoke(Method.java:597)
    [junit]     at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:346)
    [junit]     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1378)
    [junit]     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1374)
    [junit]     at java.security.AccessController.doPrivileged(Native Method)
    [junit]     at javax.security.auth.Subject.doAs(Subject.java:396)
    [junit]     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    [junit]     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1372)
    [junit] )
    [junit] Tests run: 4, Failures: 0, Errors: 1, Time elapsed: 1.286 sec

Details in build/test/TEST-org.apache.hadoop.hdfs.security.token.block.TestBlockToken.txt:

2010-08-27 16:43:39,167 INFO  ipc.Server (Server.java:run(1386)) - IPC Server handler 1 on 58724, call getReplicaVisibleLength(blk_-108_0) from 127.0.1.1:47663: error: java.io.IOException: java.lang.AssertionError: Only one BlockTokenId
java.io.IOException: java.lang.AssertionError: Only one BlockTokenIdentifier expected expected:<1> but was:<0>
        at org.junit.Assert.fail(Assert.java:91)
        at org.junit.Assert.failNotEquals(Assert.java:645)
        at org.junit.Assert.assertEquals(Assert.java:126)
        at org.junit.Assert.assertEquals(Assert.java:470)
        at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:105)
        at org.apache.hadoop.hdfs.security.token.block.TestBlockToken$getLengthAnswer.answer(TestBlockToken.java:88)
        at org.mockito.internal.stubbing.StubbedInvocationMatcher.answer(StubbedInvocationMatcher.java:29)
        at org.mockito.internal.MockHandler.handle(MockHandler.java:95)
        at org.mockito.internal.creation.MethodInterceptorFilter.intercept(MethodInterceptorFilter.java:47)
        at org.apache.hadoop.hdfs.protocol.ClientDatanodeProtocol$$EnhancerByMockitoWithCGLIB$$4e50a34e.getReplicaVisibleLength(<generated>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:346)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1378)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1374)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1372)

Two failures:

Build log:

    [junit] Running org.apache.hadoop.hdfs.TestFiHFlush
    [junit] Tests run: 9, Failures: 2, Errors: 0, Time elapsed: 40.849 sec
    [junit] Test org.apache.hadoop.hdfs.TestFiHFlush FAILED

Details in build-fi/test/TEST-org.apache.hadoop.hdfs.TestFiHFlush.txt

2010-08-27 17:31:59,606 INFO  datanode.DataNode (FSDataset.java:registerMBean(1757)) - Registered FSDatasetStatusMBean
2010-08-27 17:31:59,608 WARN  datanode.DataNode (DataNode.java:registerMXBean(503)) - Failed to register NameNode MXBean
javax.management.InstanceAlreadyExistsException: HadoopInfo:type=DataNodeInfo
        at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
...

2010-08-27 17:32:00,390 INFO  datanode.DataNode (FSDataset.java:registerMBean(1757)) - Registered FSDatasetStatusMBean
2010-08-27 17:32:00,391 WARN  datanode.DataNode (DataNode.java:registerMXBean(503)) - Failed to register NameNode MXBean
javax.management.InstanceAlreadyExistsException: HadoopInfo:type=DataNodeInfo
        at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:453)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.internal_addObject(DefaultMBeanServerInterceptor.java:1484)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:963)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
...
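
For context on the warning above: it is the standard JMX behavior when two MBeans are registered under the same ObjectName, which presumably happens here because the test runs more than one DataNode in the same JVM. A small standalone sketch (made-up class names, not Hadoop code), using only the JDK's javax.management API:

    import java.lang.management.ManagementFactory;
    import javax.management.InstanceAlreadyExistsException;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class DuplicateMBeanDemo {
        // Standard MBean: the implementation must implement an interface named <Class>MBean.
        public interface DemoMBean { int getValue(); }
        public static class Demo implements DemoMBean {
            public int getValue() { return 42; }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("HadoopInfo:type=DataNodeInfo");
            mbs.registerMBean(new Demo(), name);        // first registration succeeds
            try {
                mbs.registerMBean(new Demo(), name);    // same ObjectName again
            } catch (InstanceAlreadyExistsException e) {
                // Same exception as in the log above: the name is already taken.
                System.out.println("second registration failed: " + e);
            }
        }
    }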

> Add LOG.isDebugEnabled() guard for each LOG.debug("...")
> --------------------------------------------------------
>
>                 Key: HDFS-1320
>                 URL: https://issues.apache.org/jira/browse/HDFS-1320
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.22.0
>            Reporter: Erik Steffl
>            Assignee: Erik Steffl
>             Fix For: 0.22.0
>
>         Attachments: HDFS-1320-0.22-1.patch, HDFS-1320-0.22-2.patch, 
> HDFS-1320-0.22-3.patch, HDFS-1320-0.22.patch
>
>
> Each LOG.debug("...") should be executed only if LOG.isDebugEnabled() is 
> true; in some cases it is expensive to construct the string that is being 
> printed to the log. It's much easier to always use LOG.isDebugEnabled() 
> because it's easier to check (rather than reasoning in each case about 
> whether it's necessary or not).
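
For illustration, a minimal sketch of the guard pattern described above, assuming commons-logging (the logging API HDFS used at the time); the class and method names are made up:

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public class DebugGuardExample {
        private static final Log LOG = LogFactory.getLog(DebugGuardExample.class);

        void process(Object block, long offset) {
            // Unguarded: the message string is concatenated on every call,
            // even when DEBUG logging is disabled.
            // LOG.debug("Processing " + block + " at offset " + offset);

            // Guarded: the (possibly expensive) string construction is skipped
            // unless DEBUG is actually enabled.
            if (LOG.isDebugEnabled()) {
                LOG.debug("Processing " + block + " at offset " + offset);
            }
        }
    }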

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
