[
https://issues.apache.org/jira/browse/HDFS-12210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16143143#comment-16143143
]
Hadoop QA commented on HDFS-12210:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 37s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 41s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 39s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project: The patch generated 6 new + 1 unchanged - 0 fixed = 7 total (was 1) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 38s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 43s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}106m 50s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.scm.TestArchive |
| | hadoop.ozone.web.client.TestKeys |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| Timed out junit tests | org.apache.hadoop.ozone.scm.node.TestQueryNode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HDFS-12210 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883972/HDFS-12210-HDFS-7240.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux c1b4bc3f8f23 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-7240 / 1586f20 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/20884/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20884/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs-client.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/20884/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/20884/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/20884/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> Block Storage: volume creation times out while creating 3TB volume because of too many containers
> -------------------------------------------------------------------------------------------------
>
> Key: HDFS-12210
> URL: https://issues.apache.org/jira/browse/HDFS-12210
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Affects Versions: HDFS-7240
> Reporter: Mukul Kumar Singh
> Assignee: Mukul Kumar Singh
> Fix For: HDFS-7240
>
> Attachments: HDFS-12210-HDFS-7240.001.patch
>
>
> Volume creation times out while creating a 3TB volume because of the number of containers that must be created.
> {code}
> [hdfs@ctr-e134-1499953498516-64773-01-000003 ~]$
> /opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/bin/hdfs cblock -c bilbo disk1 3TB 4
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in
> [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in
> [jar:file:/opt/hadoop/hadoop-3.0.0-beta1-SNAPSHOT/share/hadoop/hdfs/lib/logback-classic-1.0.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> 17/07/28 09:32:40 WARN util.NativeCodeLoader: Unable to load native-hadoop
> library for your platform... using builtin-java classes where applicable
> 17/07/28 09:32:40 INFO cli.CBlockCli: create volume:[bilbo, disk1, 3TB, 4]
> 17/07/28 09:33:10 ERROR cli.CBlockCli: java.net.SocketTimeoutException: Call
> From ctr-e134-1499953498516-64773-01-000003.hwx.site/172.27.51.64 to
> 0.0.0.0:9810 failed on socket timeout exception:
> java.net.SocketTimeoutException: 30000 millis timeout while waiting for
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected
> local=/172.27.51.64:59317 remote=/0.0.0.0:9810]; For more details see:
> http://wiki.apache.org/hadoop/SocketTimeout
> {code}
> Looking at the logs, it can be seen that 614 containers were created for the volume before the timeout.
> {code}
> 2017-07-28 09:32:40,853 INFO org.apache.hadoop.cblock.CBlockManager: Create
> volume received: userName: bilbo volumeName: disk1 volumeSize: 3298534883328
> blockSize: 4096
> 2017-07-28 09:32:42,545 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#0 leader:172.27.50.192:9866 machines:[172.27.50.192:9866]
> replication factor:1
> 2017-07-28 09:32:43,213 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#1 leader:172.27.51.65:9866 machines:[172.27.51.65:9866]
> replication factor:1
> 2017-07-28 09:32:43,484 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#2 leader:172.27.50.192:9866 machines:[172.27.50.192:9866]
> replication factor:1
> .
> .
> .
> .
> 2017-07-28 09:35:01,712 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#612 leader:172.27.50.128:9866 machines:[172.27.50.128:9866]
> replication factor:1
> 2017-07-28 09:35:01,963 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#613 leader:172.27.50.128:9866 machines:[172.27.50.128:9866]
> replication factor:1
> 2017-07-28 09:35:02,256 INFO
> org.apache.hadoop.scm.client.ContainerOperationClient: Created container
> bilbo:disk1#614 leader:172.27.50.192:9866 machines:[172.27.50.192:9866]
> replication factor:1
> 2017-07-28 09:35:02,358 INFO org.apache.hadoop.cblock.CBlockManager: Create
> volume received: userName: bilbo volumeName: disk2 volumeSize: 1099511627776
> blockSize: 4096
> 2017-07-28 09:35:02,368 WARN org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9810, call Call#0 Retry#0 org.apache.hadoop.cblock.protocolPB.CBlockServiceProtocol.createVolume from 172.27.51.64:59317: output error
> 2017-07-28 09:35:02,369 INFO org.apache.hadoop.ipc.Server: IPC Server handler
> 0 on 9810 caught an exception
> java.nio.channels.ClosedChannelException
> at
> sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:270)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:461)
> at org.apache.hadoop.ipc.Server.channelWrite(Server.java:3242)
> at org.apache.hadoop.ipc.Server.access$1700(Server.java:137)
> at
> org.apache.hadoop.ipc.Server$Responder.processResponse(Server.java:1466)
> at org.apache.hadoop.ipc.Server$Responder.doRespond(Server.java:1536)
> at
> org.apache.hadoop.ipc.Server$Connection.sendResponse(Server.java:2586)
> at
> org.apache.hadoop.ipc.Server$Connection.access$300(Server.java:1608)
> at org.apache.hadoop.ipc.Server$RpcCall.doResponse(Server.java:933)
> at org.apache.hadoop.ipc.Server$Call.sendResponse(Server.java:767)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:878)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1962)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {code}
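The numbers in the log are consistent with serial container creation being the bottleneck: the timestamps span roughly 09:32:40 to 09:35:02 for containers #0 through #614, i.e. about 230 ms per container, so the create-volume call cannot possibly return inside the client's 30 s RPC timeout. A back-of-the-envelope sketch (the 5 GB container size and the per-container cost are assumptions read off the log above, not values taken from the patch):

```python
# Rough arithmetic for why creating a 3 TB cblock volume overruns the
# 30 s RPC timeout. CONTAINER_SIZE and PER_CONTAINER_S are assumptions
# inferred from the log, not constants from the Hadoop source.

VOLUME_SIZE = 3 * 1024**4        # 3 TB; matches volumeSize 3298534883328 in the log
CONTAINER_SIZE = 5 * 1024**3     # assumed container size of 5 GB
RPC_TIMEOUT_S = 30               # 30000 ms timeout from the SocketTimeoutException
PER_CONTAINER_S = 0.23           # ~142 s / 615 containers, from the log timestamps

containers = -(-VOLUME_SIZE // CONTAINER_SIZE)   # ceiling division
serial_time_s = containers * PER_CONTAINER_S

print(containers)       # 615 (the log reaches bilbo:disk1#614, 0-indexed)
print(serial_time_s)    # ~141 s, several times the 30 s client timeout
```

Under these assumptions the serial creation path needs on the order of two and a half minutes, which explains both the client-side SocketTimeoutException at ~30 s and the server's later ClosedChannelException when it finally tries to answer a connection the client has already abandoned.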
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)