[ https://issues.apache.org/jira/browse/HDFS-4505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14506445#comment-14506445 ]

Hadoop QA commented on HDFS-4505:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 50s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | javac |   7m 33s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   5m 25s | The applied patch generated 3 additional checkstyle issues. |
| {color:green}+1{color} | install |   1m 32s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m  6s | The patch does not introduce any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | native |   3m 19s | Pre-build of native portion |
| {color:red}-1{color} | hdfs tests | 165m 49s | Tests failed in hadoop-hdfs. |
| | | 212m 15s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.hdfs.server.namenode.TestFileTruncate |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12691753/HDFS-4505.002.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / e71d0d8 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/10339/artifact/patchprocess/checkstyle-result-diff.txt |
| hadoop-hdfs test log | https://builds.apache.org/job/PreCommit-HDFS-Build/10339/artifact/patchprocess/testrun_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/10339/testReport/ |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/10339//console |


This message was automatically generated.

> Balancer failure with nameservice configuration.
> ------------------------------------------------
>
>                 Key: HDFS-4505
>                 URL: https://issues.apache.org/jira/browse/HDFS-4505
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: balancer & mover
>    Affects Versions: 2.0.2-alpha
>         Environment: OS: Mac OS X Server 10.6.8/ Linux 2.6.32 x86_64
>            Reporter: QueryIO
>            Assignee: Chu Tong
>              Labels: balancer, hdfs
>         Attachments: HADOOP-9172.patch, HDFS-4505.002.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> This set of properties ...
> <property><name>dfs.namenode.https-address.NameNode1</name><value>192.168.0.10:50470</value></property>
> <property><name>dfs.namenode.http-address.NameNode1</name><value>192.168.0.10:50070</value></property>
> <property><name>dfs.namenode.rpc-address.NameNode1</name><value>192.168.0.10:9000</value></property>
> <property><name>dfs.nameservice.id</name><value>NameNode1</value></property>
> <property><name>dfs.nameservices</name><value>NameNode1</value></property>
> gives the following issue while running the balancer ...
> 2012-12-27 15:42:36,193 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: namenodes = [hdfs://queryio10.local:9000, hdfs://192.168.0.10:9000]
> 2012-12-27 15:42:36,194 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: p         = Balancer.Parameters[BalancingPolicy.Node, threshold=10.0]
> 2012-12-27 15:42:37,433 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.0.10:50010
> 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 over-utilized: []
> 2012-12-27 15:42:37,433 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 underutilized: []
> 2012-12-27 15:42:37,436 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /default-rack/192.168.0.10:50010
> 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 over-utilized: []
> 2012-12-27 15:42:37,436 INFO org.apache.hadoop.hdfs.server.balancer.Balancer: 0 underutilized: []
> 2012-12-27 15:42:37,570 WARN org.apache.hadoop.hdfs.DFSClient: DataStreamer Exception
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /system/balancer.id File does not exist. Holder DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1164)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>       at $Proxy10.addBlock(Unknown Source)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>       at $Proxy10.addBlock(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
> 2012-12-27 15:42:37,579 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /system/balancer.id
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /system/balancer.id File does not exist. Holder DFSClient_NONMAPREDUCE_1926739478_1 does not have any open files.
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2315)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2306)
>       at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2102)
>       at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:469)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:294)
>       at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:43138)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:910)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1694)
>       at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1690)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1367)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1688)
>       at org.apache.hadoop.ipc.Client.call(Client.java:1164)
>       at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
>       at $Proxy10.addBlock(Unknown Source)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
>       at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
>       at $Proxy10.addBlock(Unknown Source)
>       at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:285)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1150)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1003)
>       at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
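
For context on the configuration quoted above: the balancer log lists the same NameNode under two URIs, hdfs://queryio10.local:9000 (presumably resolved from fs.defaultFS) and hdfs://192.168.0.10:9000 (from dfs.namenode.rpc-address.NameNode1), and the LeaseExpiredException on /system/balancer.id appears once the balancer treats those two URIs as distinct namenodes. A minimal configuration sketch that avoids the duplication by using one canonical authority for both settings is below; it is an illustration only, not the attached patch, and the hostname nn1.example.com is a placeholder.

{code:xml}
<!-- Hypothetical hdfs-site.xml fragment: use the same canonical host:port
     for the nameservice RPC address as in fs.defaultFS (core-site.xml),
     so the balancer resolves a single NameNode URI. -->
<property>
  <name>dfs.nameservices</name>
  <value>NameNode1</value>
</property>
<property>
  <!-- assumes fs.defaultFS = hdfs://nn1.example.com:9000 in core-site.xml -->
  <name>dfs.namenode.rpc-address.NameNode1</name>
  <value>nn1.example.com:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.NameNode1</name>
  <value>nn1.example.com:50070</value>
</property>
<property>
  <name>dfs.namenode.https-address.NameNode1</name>
  <value>nn1.example.com:50470</value>
</property>
{code}

With both settings resolving to the same authority, the balancer would presumably register only one namenode and hold a single lease on /system/balancer.id.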



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
