[ https://issues.apache.org/jira/browse/HADOOP-8041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13204274#comment-13204274 ]
Todd Lipcon commented on HADOOP-8041:
-------------------------------------
My thinking is the following:
For a given proxy object, when we first complete a successful RPC, we can set an
internal flag in the failover proxy provider indicating that it has connected at
least once. Then, whenever we fail over, if that flag is set we should log a
warning; otherwise, log it only at DEBUG level.
This would allow FsShell commands to avoid printing warnings in an "already been
failed over for a while" situation, but would still cause a message to be logged
in MR tasks or HBase servers if a failover takes place while they're accessing
DFS.
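To make the proposal concrete, here is a minimal self-contained sketch of the flag logic. All names here (FailoverAwareInvoker, hasMadeSuccessfulCall, FailoverRequiredException) are hypothetical stand-ins for the real RetryInvocationHandler / FailoverProxyProvider machinery, and the unbounded retry loop elides the real failover-attempt limits:
{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch only; not the actual
// org.apache.hadoop.io.retry.RetryInvocationHandler API.
class FailoverAwareInvoker {
  private static final Logger LOG = Logger.getLogger("FailoverAwareInvoker");

  // Set once this proxy has completed at least one successful RPC.
  private final AtomicBoolean hasMadeSuccessfulCall = new AtomicBoolean(false);

  <T> T invoke(RpcCall<T> call) throws Exception {
    while (true) { // real code would cap the number of failover attempts
      try {
        T result = call.run();
        hasMadeSuccessfulCall.set(true); // remember that we connected once
        return result;
      } catch (FailoverRequiredException e) {
        // Only warn if we had already connected: the failover happened
        // during this client's lifetime, not before it started.
        if (hasMadeSuccessfulCall.get()) {
          LOG.warning("Failing over to another NameNode: " + e.getMessage());
        } else {
          LOG.log(Level.FINE, "Failing over to another NameNode", e);
        }
        failoverToNextProxy();
      }
    }
  }

  private void failoverToNextProxy() {
    // Would delegate to the failover proxy provider (elided in this sketch).
  }

  interface RpcCall<T> { T run() throws Exception; }

  static class FailoverRequiredException extends Exception {
    FailoverRequiredException(String msg) { super(msg); }
  }
}
{code}
The key point is that the flag is per proxy object, so a fresh FsShell client that starts out talking to the current active NN never sets off the warning, while a long-lived client sees it exactly when the failover actually happens underneath it.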
> HA: log a warning when a failover is first attempted
> -----------------------------------------------------
>
> Key: HADOOP-8041
> URL: https://issues.apache.org/jira/browse/HADOOP-8041
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: ha
> Affects Versions: HA Branch (HDFS-1623)
> Reporter: Eli Collins
>
> Currently we always warn for each client operation made to a NN we've failed
> over to:
> {noformat}
> hadoop-0.24.0-SNAPSHOT $ ./bin/hdfs dfs -lsr /
> 12/02/08 17:43:04 WARN retry.RetryInvocationHandler: Exception while invoking
> getFileInfo of class
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB after 0
> fail over attempts. Trying to fail over immediately.
> {noformat}
> I'm going to remove this warning in HDFS-2918, since we shouldn't warn every
> time a client performs an operation, e.g. it could be weeks after the failover.
> But we should log a warning, e.g. the first time the client fails over.