[ 
https://issues.apache.org/jira/browse/HADOOP-12430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17384655#comment-17384655
 ] 

Hadoop QA commented on HADOOP-12430:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 21s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  0s{color} | {color:green}{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green}{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} | {color:green}test4tests{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
|| || || || {color:brown} HADOOP-17800 Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 12m 45s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 38s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m  9s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 22s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m  5s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 30s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 24m 50s{color} | {color:green}{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  7s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  8s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 41m 23s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  9m 21s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 25s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 23s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 23s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 23s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 24s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 24s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 32s{color} | {color:green}{color} | {color:green} root: The patch generated 0 new + 53 unchanged - 2 fixed = 53 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  4m 37s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green}{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 54s{color} | {color:green}{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m  4s{color} | {color:green}{color} | {color:green} the patch passed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m  0s{color} | {color:green}{color} | {color:green} the patch passed with JDK Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green} 12m 51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} || ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 21m 24s{color} | {color:green}{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 18s{color} | {color:green}{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}393m 38s{color} | {color:red}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/213/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m  5s{color} | {color:green}{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}656m 49s{color} | {color:black}{color} | {color:black}{color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestLeaseRecovery2 |
|   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS |
|   | hadoop.hdfs.server.datanode.TestBlockScanner |
|   | hadoop.hdfs.TestDFSClientRetries |
|   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
|   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
|   | hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | hadoop.hdfs.security.TestDelegationTokenForProxyUser |
|   | hadoop.hdfs.TestStateAlignmentContextWithHA |
|   | hadoop.hdfs.TestSnapshotCommands |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
|   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
|   | hadoop.hdfs.server.namenode.TestAddOverReplicatedStripedBlocks |
|   | hadoop.fs.viewfs.TestViewFileSystemOverloadSchemeWithHdfsScheme |
|   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
|   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/213/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-12430 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13030881/HDFS-8078-HADOOP-17800.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle spotbugs |
| uname | Linux 8a27e7e39d65 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | HADOOP-17800 / 1f1c38bbe08 |
| Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
| Test Results | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/213/testReport/ |
| Max. process+thread count | 2823 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/213/console |
| versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
| Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |


This message was automatically generated.



> Fix HDFS client gets errors trying to connect to IPv6 DataNode
> --------------------------------------------------------------
>
>                 Key: HADOOP-12430
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12430
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 2.6.0
>            Reporter: Nate Edel
>            Assignee: Nate Edel
>            Priority: Major
>              Labels: BB2015-05-TBR, ipv6
>         Attachments: HDFS-8078-HADOOP-17800.001.patch, 
> HDFS-8078-HADOOP-17800.002.patch, HDFS-8078.10.patch, HDFS-8078.11.patch, 
> HDFS-8078.12.patch, HDFS-8078.13.patch, HDFS-8078.14.patch, 
> HDFS-8078.15.patch, HDFS-8078.9.patch, dummy.patch
>
>
> 1st exception, on put:
> 15/03/23 18:43:18 WARN hdfs.DFSClient: DataStreamer Exception
> java.lang.IllegalArgumentException: Does not contain a valid host:port 
> authority: 2401:db00:1010:70ba:face:0:8:0:50010
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:212)
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>       at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:153)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1607)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1408)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
>       at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
> Appears to stem from code in DatanodeID which assumes it is safe to
> concatenate ipaddr + ":" + port -- fine for IPv4, but not for IPv6.
> NetUtils.createSocketAddr() assembles a Java URI object, which requires the
> format proto://[2401:db00:1010:70ba:face:0:8:0]:50010
> Currently using InetAddress.getByName() to validate IPv6 addresses (guava
> InetAddresses.forString() has been flaky), but we could also use our own
> parsing. (From logging, this looks like a low-enough-frequency call that the
> extra object creation should not be a problem, and for me the slight risk of
> passing in bad input that is not actually an IPv4 or IPv6 address, and thus
> triggering an external DNS lookup, is outweighed by getting the address
> normalized and avoiding a rewrite of the parsing.)
> Alternatively, sun.net.util.IPAddressUtil.isIPv6LiteralAddress() could be used.
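> A minimal sketch of the bracketing and validation described above
> (illustrative only -- toAuthority() is a hypothetical helper, not code from
> the attached patches):
> {code:java}
> import java.net.Inet6Address;
> import java.net.InetAddress;
> import java.net.UnknownHostException;
>
> public class Ipv6AuthorityExample {
>
>   // Join an address literal and a port into a host:port authority.
>   // IPv6 literals must be wrapped in brackets so that URI-based parsers
>   // (e.g. NetUtils.createSocketAddr) will accept the result.
>   static String toAuthority(String ipAddr, int port) {
>     if (ipAddr.indexOf(':') >= 0 && !ipAddr.startsWith("[")) {
>       return "[" + ipAddr + "]:" + port;   // IPv6 literal
>     }
>     return ipAddr + ":" + port;            // IPv4 literal or hostname
>   }
>
>   public static void main(String[] args) throws UnknownHostException {
>     System.out.println(toAuthority("10.0.0.1", 50010));
>     // -> 10.0.0.1:50010
>     System.out.println(toAuthority("2401:db00:1010:70ba:face:0:8:0", 50010));
>     // -> [2401:db00:1010:70ba:face:0:8:0]:50010
>
>     // Validation/normalization: for a literal, getByName() does not hit DNS,
>     // and the runtime type tells us whether it was v4 or v6.
>     InetAddress addr = InetAddress.getByName("2401:db00:1010:70ba:face:0:8:0");
>     System.out.println(addr instanceof Inet6Address);  // true
>   }
> }
> {code}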
> -------
> 2nd exception (on datanode)
> 15/04/13 13:18:07 ERROR datanode.DataNode: 
> dev1903.prn1.facebook.com:50010:DataXceiver error processing unknown 
> operation  src: /2401:db00:20:7013:face:0:7:0:54152 dst: 
> /2401:db00:11:d010:face:0:2f:0:50010
> java.io.EOFException
>         at java.io.DataInputStream.readShort(DataInputStream.java:315)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.readOp(Receiver.java:58)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:226)
>         at java.lang.Thread.run(Thread.java:745)
> This also surfaces as the client error "-get: 2401 is not an IP string literal."
> The existing parsing logic here needs to split at the last colon rather than
> the first; using lastIndexOf rather than split should also be a tiny bit
> faster. The techniques above could be used instead.
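> A minimal sketch of that last-colon handling (again illustrative only;
> hostFrom()/portFrom() are made-up names):
> {code:java}
> public class LastColonSplitExample {
>
>   // Split "host:port" at the LAST colon so that a bracket-less IPv6 literal
>   // keeps its full address; splitting at the first colon is what produces
>   // errors such as "2401 is not an IP string literal".
>   static String hostFrom(String addrWithPort) {
>     int idx = addrWithPort.lastIndexOf(':');
>     return idx < 0 ? addrWithPort : addrWithPort.substring(0, idx);
>   }
>
>   static int portFrom(String addrWithPort) {
>     return Integer.parseInt(
>         addrWithPort.substring(addrWithPort.lastIndexOf(':') + 1));
>   }
>
>   public static void main(String[] args) {
>     String src = "2401:db00:20:7013:face:0:7:0:54152";
>     System.out.println(hostFrom(src));  // 2401:db00:20:7013:face:0:7:0
>     System.out.println(portFrom(src));  // 54152
>   }
> }
> {code}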



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
