[
https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381318#comment-17381318
]
Hadoop QA commented on HADOOP-12430:
------------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m
12s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} No case conflicting files
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} The patch does not contain any
@author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} The patch appears to include 2
new or modified test files. {color} |
|| || || || {color:brown} HADOOP-17800 Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 12m
32s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m
37s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m
0s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m
55s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m
26s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m
5s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
27m 0s{color} | {color:green}{color} | {color:green} branch has no errors when
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m
49s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m
13s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 44m
25s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 10m
27s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m
30s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m
36s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m
1s{color} | {color:green}{color} | {color:green} the patch passed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 28m 1s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
| {color:red} root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 1 new + 1912 unchanged - 0
fixed = 1913 total (was 1912) {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m
51s{color} | {color:green}{color} | {color:green} the patch passed with JDK
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 23m 51s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
| {color:red} root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with
JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 1 new + 1790
unchanged - 0 fixed = 1791 total (was 1790) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}
4m 33s{color} |
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/diff-checkstyle-root.txt{color}
| {color:orange} root: The patch generated 40 new + 53 unchanged - 2 fixed =
93 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m
3s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m
0s{color} | {color:green}{color} | {color:green} The patch has no whitespace
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}
18m 12s{color} | {color:green}{color} | {color:green} patch has no errors when
building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m
51s{color} |
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/patch-javadoc-hadoop-hdfs-project_hadoop-hdfs-client-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
| {color:red} hadoop-hdfs-client in the patch failed with JDK
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m
30s{color} | {color:green}{color} | {color:green} the patch passed with JDK
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 9m
27s{color} | {color:green}{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 17s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt{color}
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m
47s{color} | {color:green}{color} | {color:green} hadoop-hdfs-client in the
patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 30s{color}
|
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color}
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:blue}0{color} | {color:blue} asflicense {color} | {color:blue} 0m
42s{color} | {color:blue}{color} | {color:blue} ASF License check generated no
output? {color} |
| {color:black}{color} | {color:black} {color} | {color:black}322m 58s{color} |
{color:black}{color} | {color:black}{color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
| | hadoop.net.TestNetUtils |
| | hadoop.hdfs.TestFileChecksum |
| | hadoop.hdfs.TestMaintenanceState |
| | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
| | hadoop.hdfs.TestDFSStorageStateRecovery |
| | hadoop.hdfs.TestFileCorruption |
| | hadoop.hdfs.TestDecommissionWithBackoffMonitor |
| | hadoop.hdfs.TestViewDistributedFileSystem |
| | hadoop.hdfs.TestDecommission |
| | hadoop.hdfs.TestSetrepIncreasing |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base:
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/artifact/out/Dockerfile
|
| JIRA Issue | HADOOP-12430 |
| JIRA Patch URL |
https://issues.apache.org/jira/secure/attachment/13030651/HDFS-8078-HADOOP-17800.001.patch
|
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite
unit shadedclient findbugs checkstyle spotbugs |
| uname | Linux 2a3733414ee0 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | HADOOP-17800 / 1dd03cc4b57 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions |
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04
/usr/lib/jvm/java-8-openjdk-amd64:Private
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results |
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/testReport/ |
| Max. process+thread count | 2685 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common
hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output |
https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/210/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
This message was automatically generated.
> Fix HDFS client gets errors trying to connect to IPv6 DataNode
> -----------------------------------------------------------------
>
> Key: HADOOP-12430
> URL: https://issues.apache.org/jira/browse/HADOOP-12430
> Project: Hadoop Common
> Issue Type: Sub-task
> Affects Versions: 2.6.0
> Reporter: Nate Edel
> Assignee: Nate Edel
> Priority: Major
> Labels: BB2015-05-TBR, ipv6
> Attachments: HDFS-8078-HADOOP-17800.001.patch, HDFS-8078.10.patch,
> HDFS-8078.11.patch, HDFS-8078.12.patch, HDFS-8078.13.patch,
> HDFS-8078.14.patch, HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch
>
>
> 1st exception, on put:
> 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: 2401:db00:1010:70ba:face:0:8:0:50010
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
> at
> org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
> at
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
> Appears to actually stem from code in DataNodeID which assumes it's safe to
> append (ipaddr + ":" + port) -- which is fine for IPv4 but not for IPv6.
> NetUtils.createSocketAddr() assembles a Java URI object, which requires the
> format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
> Currently using InetAddress.getByName() to validate IPv6 (guava
> InetAddresses.forString has been flaky) but could also use our own parsing.
> (From logging this, it seems like a low-enough frequency call that the extra
> object creation shouldn't be problematic, and for me the slight risk of
> passing in bad input that is not actually an IPv4 or IPv6 address and thus
> calling an external DNS lookup is outweighed by getting the address
> normalized and avoiding rewriting parsing.)
> Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress()
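The bracketed-authority requirement described above can be sketched as follows. This is a minimal illustration only, not the actual Hadoop patch; the helper name {{toAuthority}} is hypothetical:

```java
// Minimal sketch (hypothetical helper, not the actual Hadoop fix): building a
// host:port authority that is valid for both IPv4 and IPv6 literals. An IPv6
// literal must be wrapped in brackets before the port is appended, otherwise
// URI-based parsing rejects it ("Does not contain a valid host:port authority").
public class AuthorityExample {
    static String toAuthority(String ipAddr, int port) {
        // Any colon inside the address itself marks it as an IPv6 literal.
        if (ipAddr.contains(":") && !ipAddr.startsWith("[")) {
            return "[" + ipAddr + "]:" + port;
        }
        return ipAddr + ":" + port;
    }

    public static void main(String[] args) {
        // IPv4 stays as-is; IPv6 gains brackets.
        System.out.println(toAuthority("10.0.0.1", 50010));
        System.out.println(toAuthority("2401:db00:1010:70ba:face:0:8:0", 50010));
    }
}
```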
> -------
> 2nd exception (on datanode)
> 15/04/13 13:18:07 ERROR datanode.DataNode:
> dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown
> operation src: /2401:db00:20:7013:face:0:7:0:54152 dst:
> /2401:db00:11:d010:face:0:2f:0:50010
> java.io.EOFException
> at java.io.DataInputStream.readShort(DataInputStream.java:315)
> at
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
> at
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
> at java.lang.Thread.run(Thread.java:745)
> This also surfaces as the client error "-get: 2401 is not an IP string
> literal." The existing parsing logic needs to split on the last colon rather
> than the first; using lastIndexOf rather than split should also be a tiny
> bit faster. Could alternatively use the techniques above.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)