[
https://issues.apache.org/jira/browse/HDFS-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972577#comment-16972577
]
Hadoop QA commented on HDFS-14442:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 18s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 22 unchanged - 0 fixed = 24 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 34s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 14s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.server.datanode.TestBlockHasMultipleReplicasOnSameDN |
| | hadoop.hdfs.server.datanode.TestDataNodeReconfiguration |
| | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.server.datanode.checker.TestDatasetVolumeCheckerTimeout |
| | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
| | hadoop.hdfs.server.mover.TestMover |
| | hadoop.hdfs.server.mover.TestStorageMover |
| | hadoop.hdfs.server.datanode.TestDataNodeLifeline |
| | hadoop.hdfs.server.blockmanagement.TestBlockInfoStriped |
| | hadoop.hdfs.server.blockmanagement.TestBlockReportRateLimiting |
| | hadoop.hdfs.server.datanode.TestBlockRecovery |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.blockmanagement.TestPendingReconstruction |
| | hadoop.hdfs.server.blockmanagement.TestReplicationPolicy |
| | hadoop.hdfs.server.datanode.TestBatchIbr |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14442 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985620/HDFS-14442.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3f6802aff991 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb512f5 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/28294/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/28294/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/28294/testReport/ |
| Max. process+thread count | 3057 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/28294/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> Disagreement between HAUtil.getAddressOfActive and
> RpcInvocationHandler.getConnectionId
> ---------------------------------------------------------------------------------------
>
> Key: HDFS-14442
> URL: https://issues.apache.org/jira/browse/HDFS-14442
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.3.0
> Reporter: Erik Krogen
> Assignee: Ravuri Sushma sree
> Priority: Major
> Attachments: HDFS-14442.001.patch, HDFS-14442.002.patch,
> HDFS-14442.003.patch
>
>
> While working on HDFS-14245, we noticed a discrepancy in some proxy-handling
> code.
> The description of {{RpcInvocationHandler.getConnectionId()}} states:
> {code}
> /**
>  * Returns the connection id associated with the InvocationHandler instance.
>  * @return ConnectionId
>  */
> ConnectionId getConnectionId();
> {code}
> It makes no claim about whether that connection ID will point to the active
> NameNode or not. Yet in {{HAUtil}} we have:
> {code}
> /**
>  * Get the internet address of the currently-active NN. This should rarely be
>  * used, since callers of this method who connect directly to the NN using the
>  * resulting InetSocketAddress will not be able to connect to the active NN if
>  * a failover were to occur after this method has been called.
>  *
>  * @param fs the file system to get the active address of.
>  * @return the internet address of the currently-active NN.
>  * @throws IOException if an error occurs while resolving the active NN.
>  */
> public static InetSocketAddress getAddressOfActive(FileSystem fs)
>     throws IOException {
>   if (!(fs instanceof DistributedFileSystem)) {
>     throw new IllegalArgumentException("FileSystem " + fs + " is not a DFS.");
>   }
>   // force client address resolution.
>   fs.exists(new Path("/"));
>   DistributedFileSystem dfs = (DistributedFileSystem) fs;
>   DFSClient dfsClient = dfs.getClient();
>   return RPC.getServerAddress(dfsClient.getNamenode());
> }
> {code}
> The call to {{RPC.getServerAddress()}} eventually terminates in
> {{RpcInvocationHandler#getConnectionId()}}, via {{RPC.getServerAddress()}} ->
> {{RPC.getConnectionIdForProxy()}} -> {{RpcInvocationHandler#getConnectionId()}}.
> {{HAUtil}} therefore appears to assume, incorrectly, that
> {{RpcInvocationHandler}} will necessarily return the connection ID of the
> _active_ NameNode. {{ObserverReadProxyProvider}} is a counter-example: its
> current connection ID may be pointing at, for example, an Observer NameNode.
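> To make the disagreement concrete, here is an illustrative, hypothetical
> sketch (not part of the attached patches). It assumes an HA nameservice
> {{ns1}} whose client configuration routes reads through
> {{ObserverReadProxyProvider}}; the URI and path are placeholders.
> {code}
> // Illustrative sketch only. Assumes hdfs-site.xml defines nameservice "ns1"
> // with dfs.client.failover.proxy.provider.ns1 set to
> // org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.
> import java.net.InetSocketAddress;
> import java.net.URI;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.HAUtil;
>
> public class ActiveAddressSketch {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     FileSystem fs = FileSystem.get(URI.create("hdfs://ns1"), conf);
>
>     // A read; ObserverReadProxyProvider may route it to an Observer NameNode,
>     // leaving the proxy's current ConnectionId pointing at that Observer.
>     fs.exists(new Path("/"));
>
>     // Despite its name, this resolves whichever NameNode the proxy's
>     // ConnectionId currently points at, which may be the Observer rather
>     // than the Active.
>     InetSocketAddress addr = HAUtil.getAddressOfActive(fs);
>     System.out.println("getAddressOfActive returned: " + addr);
>   }
> }
> {code}
> Which NameNode's address actually comes back depends on cluster state and on
> how the proxy provider routed the most recent call; the point is only that
> the {{getConnectionId()}} contract does not guarantee the active.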