[
https://issues.apache.org/jira/browse/HBASE-8349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13632462#comment-13632462
]
Hadoop QA commented on HBASE-8349:
----------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12578840/hbase-8349.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new
or modified tests.
{color:green}+1 hadoop2.0{color}. The patch compiles against the hadoop
2.0 profile.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines
longer than 100 characters.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//testReport/
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/5322//console
This message is automatically generated.
> TestLogRolling#TestLogRollOnDatanodeDeath hangs under hadoop2 profile
> ---------------------------------------------------------------------
>
> Key: HBASE-8349
> URL: https://issues.apache.org/jira/browse/HBASE-8349
> Project: HBase
> Issue Type: Sub-task
> Components: hadoop2
> Affects Versions: 0.98.0, 0.95.0
> Reporter: Jonathan Hsieh
> Assignee: Jonathan Hsieh
> Fix For: 0.98.0, 0.95.1
>
> Attachments: hbase-8349.patch
>
>
> TestLogRolling has been hanging: after a datanode is killed, the client
> attempts to recover a lease and fails forever (this example ran for a while
> and shows recovery attempt 541888; see the retry-loop sketch after this
> quoted description).
> {code}
> 2013-04-15 16:37:49,074 INFO [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(72): Attempt 541888 to recoverLease on file
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> returned false, trying for 2642865ms
> 2013-04-15 16:37:49,075 ERROR [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(86): Can't recoverLease after 541888 attempts and 2642866ms
> for
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> - continuing without the lease, but we could have a data loss.
> 2013-04-15 16:37:49,075 INFO [IPC Server handler 9 on 41333]
> namenode.FSNamesystem(1957): recoverLease: recover lease [Lease. Holder:
> DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091,
> pendingcreates: 1],
> src=/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> from client DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091
> 2013-04-15 16:37:49,075 INFO [IPC Server handler 9 on 41333]
> namenode.FSNamesystem(2981): Recovering lease=[Lease. Holder:
> DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091,
> pendingcreates: 1],
> src=/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> 2013-04-15 16:37:49,078 WARN [IPC Server handler 9 on 41333]
> namenode.FSNamesystem(3096): DIR* NameSystem.internalReleaseLease: File
> /user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> has not been closed. Lease recovery is in progress. RecoveryId = 543317 for
> block blk_7636447875270454121_1019{blockUCState=UNDER_RECOVERY,
> primaryNodeIndex=1, replicas=[ReplicaUnderConstruction[127.0.0.1:38288|RBW],
> ReplicaUnderConstruction[127.0.0.1:35956|RWR]]}
> 2013-04-15 16:37:49,078 INFO [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(72): Attempt 541889 to recoverLease on file
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> returned false, trying for 2642869ms
> 2013-04-15 16:37:49,079 ERROR [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(86): Can't recoverLease after 541889 attempts and 2642870ms
> for
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> - continuing without the lease, but we could have a data loss.
> 2013-04-15 16:37:49,079 INFO [IPC Server handler 4 on 41333]
> namenode.FSNamesystem(1957): recoverLease: recover lease [Lease. Holder:
> DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091,
> pendingcreates: 1],
> src=/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> from client DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091
> 2013-04-15 16:37:49,079 INFO [IPC Server handler 4 on 41333]
> namenode.FSNamesystem(2981): Recovering lease=[Lease. Holder:
> DFSClient_hb_rs_localhost,39898,1366065830907_1890639591_1091,
> pendingcreates: 1],
> src=/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> 2013-04-15 16:37:49,082 WARN [IPC Server handler 4 on 41333]
> namenode.FSNamesystem(3096): DIR* NameSystem.internalReleaseLease: File
> /user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> has not been closed. Lease recovery is in progress. RecoveryId = 543318 for
> block blk_7636447875270454121_1019{blockUCState=UNDER_RECOVERY,
> primaryNodeIndex=1, replicas=[ReplicaUnderConstruction[127.0.0.1:38288|RBW],
> ReplicaUnderConstruction[127.0.0.1:35956|RWR]]}
> 2013-04-15 16:37:49,083 INFO [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(72): Attempt 541890 to recoverLease on file
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> returned false, trying for 2642874ms
> 2013-04-15 16:37:49,083 ERROR [SplitLogWorker-localhost,39898,1366065830907]
> util.FSHDFSUtils(86): Can't recoverLease after 541890 attempts and 2642874ms
> for
> hdfs://localhost:41333/user/jon/hbase/.logs/localhost,41341,1366065830879-splitting/localhost%2C41341%2C1366065830879.1366065836654.meta
> - continuing without the lease, but we could have a data loss.
> {code}
> It initially starts with permissions errors similar to HBASE-7636. For now
> we disable this in the test and will address it with a fix in HBASE-8337,
> with an assist from the HDFS folks.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira