[
https://issues.apache.org/jira/browse/HADOOP-3758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613472#action_12613472
]
Raghu Angadi commented on HADOOP-3758:
--------------------------------------
That's pretty painful. We should include {{IncorrectVersionException}} as one of
the fatal exceptions at the datanode.
See {{DataNode.java:offerService()}} :
{noformat}
      } catch(RemoteException re) {
        String reClass = re.getClassName();
        if (UnregisteredDatanodeException.class.getName().equals(reClass) ||
            DisallowedDatanodeException.class.getName().equals(reClass)) {
          LOG.warn("DataNode is shutting down: " +
                   StringUtils.stringifyException(re));
          shutdown();
          return;
        }
{noformat}
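The proposed change amounts to adding {{IncorrectVersionException}} to the set of class names the datanode treats as fatal. A minimal self-contained sketch of that check, using stub exception classes in place of the real Hadoop types (the actual classes live in the Hadoop source tree, and the real patch would edit the {{if}} above rather than introduce a helper):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Stubs standing in for the Hadoop exception types named in the comment;
// only their fully-qualified class names matter for the check.
class UnregisteredDatanodeException extends Exception {}
class DisallowedDatanodeException extends Exception {}
class IncorrectVersionException extends Exception {}

public class FatalExceptionCheck {
  // Class names the datanode should shut down on, now including
  // IncorrectVersionException as proposed.
  static final Set<String> FATAL_EXCEPTIONS = new HashSet<>(Arrays.asList(
      UnregisteredDatanodeException.class.getName(),
      DisallowedDatanodeException.class.getName(),
      IncorrectVersionException.class.getName()));

  // RemoteException carries the server-side class name as a string,
  // so the comparison is by name rather than instanceof.
  static boolean isFatal(String remoteExceptionClassName) {
    return FATAL_EXCEPTIONS.contains(remoteExceptionClassName);
  }

  public static void main(String[] args) {
    System.out.println(isFatal(IncorrectVersionException.class.getName())); // true
    System.out.println(isFatal("java.io.IOException"));                     // false
  }
}
```

With a set, adding further fatal exceptions later is a one-line change instead of another {{||}} clause in the condition.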
> Excessive exceptions in HDFS namenode log file
> ----------------------------------------------
>
> Key: HADOOP-3758
> URL: https://issues.apache.org/jira/browse/HADOOP-3758
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.17.1
> Reporter: Jim Huang
>
> I upgraded a big cluster, out of which 10 nodes did not get upgraded.
> The namenode log showed excessive exceptions, which ate the entire partition
> space; in this case a log file close to 700GB was generated on the namenode.