[ https://issues.apache.org/jira/browse/HDDS-420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16612523#comment-16612523 ]

Hadoop QA commented on HDDS-420:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 53s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m  0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 52s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m  4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 40s{color} | {color:green} container-service in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 39s{color} | {color:green} server-scm in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 10s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HDDS-420 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12939435/HDDS-420.002.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux af43a36a7e42 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 47b72c8 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDDS-Build/1042/testReport/ |
| Max. process+thread count | 335 (vs. ulimit of 10000) |
| modules | C: hadoop-hdds/container-service hadoop-hdds/server-scm U: hadoop-hdds |
| Console output | https://builds.apache.org/job/PreCommit-HDDS-Build/1042/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> putKey failing with KEY_ALLOCATION_ERROR
> ----------------------------------------
>
>                 Key: HDDS-420
>                 URL: https://issues.apache.org/jira/browse/HDDS-420
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Manager
>            Reporter: Nilotpal Nandi
>            Assignee: Shashikant Banerjee
>            Priority: Blocker
>             Fix For: 0.2.1
>
>         Attachments: HDDS-420.000.patch, HDDS-420.001.patch, 
> HDDS-420.002.patch, all-node-ozone-logs-1536607597.tar.gz
>
>
> Here are the commands run:
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-000002 bin]# ./ozone oz -putKey 
> /fs-volume/fs-bucket/nn1 -file /etc/passwd
> 2018-09-09 15:39:31,131 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Create key failed, error:KEY_ALLOCATION_ERROR
> [root@ctr-e138-1518143905142-468367-01-000002 bin]#
> [root@ctr-e138-1518143905142-468367-01-000002 bin]# ./ozone fs -copyFromLocal 
> /etc/passwd /
> 2018-09-09 15:40:16,879 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> 2018-09-09 15:40:23,632 [main] ERROR - Try to allocate more blocks for write 
> failed, already allocated 0 blocks for this write.
> copyFromLocal: Message missing required fields: keyLocation
> [root@ctr-e138-1518143905142-468367-01-000002 bin]# ./ozone oz -putKey 
> /fs-volume/fs-bucket/nn2 -file /etc/passwd
> 2018-09-09 15:44:55,912 WARN util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> Create key failed, error:KEY_ALLOCATION_ERROR{noformat}
>  
> hadoop version:
> ---------------------------
> {noformat}
> [root@ctr-e138-1518143905142-468367-01-000002 bin]# ./hadoop version
> Hadoop 3.2.0-SNAPSHOT
> Source code repository git://git.apache.org/hadoop.git -r 
> bf8a1750e99cfbfa76021ce51b6514c74c06f498
> Compiled by root on 2018-09-08T10:22Z
> Compiled with protoc 2.5.0
> From source with checksum c5bbb375aed8edabd89c377af83189d
> This command was run using 
> /root/hadoop_trunk/ozone-0.3.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.2.0-SNAPSHOT.jar{noformat}
>  
> scm log:
> ---------------
> {noformat}
> 2018-09-09 15:45:00,907 INFO 
> org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl: Allocating a new 
> ratis pipeline of size: 3 id: pipelineId=f210716d-ba7b-4adf-91d6-da286e5fd010
> 2018-09-09 15:45:00,973 INFO org.apache.ratis.conf.ConfUtils: raft.rpc.type = 
> GRPC (default)
> 2018-09-09 15:45:01,007 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.message.size.max = 33554432 (custom)
> 2018-09-09 15:45:01,011 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.rpc.retryInterval = 300 ms (default)
> 2018-09-09 15:45:01,012 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.async.outstanding-requests.max = 100 (default)
> 2018-09-09 15:45:01,012 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.async.scheduler-threads = 3 (default)
> 2018-09-09 15:45:01,020 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.flow.control.window = 1MB (=1048576) (default)
> 2018-09-09 15:45:01,020 INFO org.apache.ratis.conf.ConfUtils: 
> raft.grpc.message.size.max = 33554432 (custom)
> 2018-09-09 15:45:01,102 INFO org.apache.ratis.conf.ConfUtils: 
> raft.client.rpc.request.timeout = 3000 ms (default)
> 2018-09-09 15:45:01,667 ERROR org.apache.hadoop.hdds.scm.XceiverClientRatis: 
> Failed to reinitialize 
> RaftPeer:bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9:172.27.12.96:9858 datanode: 
> bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9{ip: 172.27.12.96, host: 
> ctr-e138-1518143905142-468367-01-000007.hwx.site}
> org.apache.ratis.protocol.GroupMismatchException: 
> bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9: The group (group-7347726F7570) of 
> client-409D68EB500F does not match the group (group-2041ABBEE452) of the 
> server bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>  at 
> org.apache.ratis.util.ReflectionUtils.instantiateException(ReflectionUtils.java:222)
>  at 
> org.apache.ratis.grpc.RaftGrpcUtil.tryUnwrapException(RaftGrpcUtil.java:79)
>  at org.apache.ratis.grpc.RaftGrpcUtil.unwrapException(RaftGrpcUtil.java:67)
>  at 
> org.apache.ratis.grpc.client.RaftClientProtocolClient.blockingCall(RaftClientProtocolClient.java:127)
>  at 
> org.apache.ratis.grpc.client.RaftClientProtocolClient.reinitialize(RaftClientProtocolClient.java:102)
>  at 
> org.apache.ratis.grpc.client.GrpcClientRpc.sendRequest(GrpcClientRpc.java:77)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.sendRequest(RaftClientImpl.java:302)
>  at 
> org.apache.ratis.client.impl.RaftClientImpl.reinitialize(RaftClientImpl.java:216)
>  at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.reinitialize(XceiverClientRatis.java:163)
>  at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.reinitialize(XceiverClientRatis.java:133)
>  at 
> org.apache.hadoop.hdds.scm.XceiverClientRatis.createPipeline(XceiverClientRatis.java:97)
>  at 
> org.apache.hadoop.hdds.scm.pipelines.ratis.RatisManagerImpl.initializePipeline(RatisManagerImpl.java:105)
>  at 
> org.apache.hadoop.hdds.scm.pipelines.PipelineSelector.getReplicationPipeline(PipelineSelector.java:303)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerStateManager.allocateContainer(ContainerStateManager.java:299)
>  at 
> org.apache.hadoop.hdds.scm.container.ContainerMapping.allocateContainer(ContainerMapping.java:289)
>  at 
> org.apache.hadoop.hdds.scm.block.BlockManagerImpl.preAllocateContainers(BlockManagerImpl.java:167)
>  at 
> org.apache.hadoop.hdds.scm.block.BlockManagerImpl.allocateBlock(BlockManagerImpl.java:266)
>  at 
> org.apache.hadoop.hdds.scm.server.SCMBlockProtocolServer.allocateBlock(SCMBlockProtocolServer.java:143)
>  at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.allocateScmBlock(ScmBlockLocationProtocolServerSideTranslatorPB.java:74)
>  at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:6271)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
>  at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> Caused by: org.apache.ratis.shaded.io.grpc.StatusRuntimeException: INTERNAL: 
> bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9: The group (group-7347726F7570) of 
> client-409D68EB500F does not match the group (group-2041ABBEE452) of the 
> server bfe9c5f2-da9b-4a8f-9013-7540cbbed1c9
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.toStatusRuntimeException(ClientCalls.java:222)
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.getUnchecked(ClientCalls.java:203)
>  at 
> org.apache.ratis.shaded.io.grpc.stub.ClientCalls.blockingUnaryCall(ClientCalls.java:132)
>  at 
> org.apache.ratis.shaded.proto.grpc.AdminProtocolServiceGrpc$AdminProtocolServiceBlockingStub.reinitialize(AdminProtocolServiceGrpc.java:220)
>  at 
> org.apache.ratis.grpc.client.RaftClientProtocolClient.lambda$reinitialize$1(RaftClientProtocolClient.java:104)
>  at 
> org.apache.ratis.grpc.client.RaftClientProtocolClient.blockingCall(RaftClientProtocolClient.java:125)
>  ... 24 more{noformat}
>  
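
The GroupMismatchException in the scm log above prints two group ids: group-7347726F7570 on the client side and group-2041ABBEE452 on the server side. Below is a minimal, self-contained decoding sketch (plain JDK, no Ratis dependency); the reading that the suffix is a hex dump of trailing bytes of the RaftGroupId is an assumption about Ratis's toString format, not something the log itself states.

{code:java}
// Minimal sketch: hex-decode the suffixes of the two RaftGroupIds printed in
// the GroupMismatchException above. That the suffix is a hex dump of part of
// the group id's bytes is an assumption, not confirmed by the log.
public class GroupIdSuffixDecode {

  static String decode(String hex) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < hex.length(); i += 2) {
      int b = Integer.parseInt(hex.substring(i, i + 2), 16);
      // Printable ASCII is shown as a character, everything else as \xNN.
      sb.append(b >= 0x20 && b < 0x7f
          ? String.valueOf((char) b)
          : String.format("\\x%02X", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(decode("7347726F7570")); // client group -> "sGroup"
    System.out.println(decode("2041ABBEE452")); // server group -> opaque bytes
  }
}
{code}

Under that assumption the client-side id decodes to the readable tail "sGroup" while the server-side id is opaque bytes, which would be consistent with the client still addressing a fixed placeholder group while the server has already joined the group of the freshly allocated pipeline.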
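
For reference, the failing putKey from the commands at the top of the description can also be driven through the Ozone Java client rather than the CLI. The sketch below is a rough reconstruction under assumptions: the class and method names (OzoneClientFactory.getClient, ObjectStore, OzoneBucket#createKey) are assumptions about the client API of this code line, not taken from this report, so treat it as illustrative only.

{code:java}
// Rough sketch, not authoritative: programmatic equivalent of
//   ozone oz -putKey /fs-volume/fs-bucket/nn1 -file /etc/passwd
// API names below are assumptions about the Ozone client of this era.
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.ObjectStore;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;
import org.apache.hadoop.ozone.client.OzoneVolume;
import org.apache.hadoop.ozone.client.io.OzoneOutputStream;

public class PutKeyRepro {
  public static void main(String[] args) throws Exception {
    byte[] data = Files.readAllBytes(Paths.get("/etc/passwd"));

    OzoneConfiguration conf = new OzoneConfiguration();
    try (OzoneClient client = OzoneClientFactory.getClient(conf)) {
      ObjectStore store = client.getObjectStore();
      OzoneVolume volume = store.getVolume("fs-volume");
      OzoneBucket bucket = volume.getBucket("fs-bucket");

      // Block allocation for the key happens on this path; with the pipeline
      // in the state described above, this is where KEY_ALLOCATION_ERROR
      // surfaces on the client.
      try (OzoneOutputStream out = bucket.createKey("nn1", data.length)) {
        out.write(data);
      }
    }
  }
}
{code}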


