[
https://issues.apache.org/jira/browse/HDFS-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13040682#comment-13040682
]
Todd Lipcon commented on HDFS-1954:
-----------------------------------
Hey Suresh. I agree that this is not the most common case for large existing
clusters. But people running large existing clusters already know the above
and shouldn't be confused by the message. The thinking is that "hint"-style
messages ought to be directed at new users, since they're the ones who don't
have the operational experience to know better.
Do you have an alternative patch that would satisfy both the new users and
the big-cluster operators?
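For concreteness, here is a minimal sketch of the kind of check the
description asks for. The class and method names (CorruptFilesWarning,
buildWarning) are illustrative placeholders, not the attached patch, and
note that dfs.data.dir is read by the DataNodes, so the NameNode can only
hint at it rather than verify it:

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical helper, not the attached patch: builds the
    // missing-blocks warning and appends a configuration hint for
    // new users when dfs.data.dir is absent from the loaded config.
    public class CorruptFilesWarning {
      static String buildWarning(long missingBlocks, Configuration conf) {
        StringBuilder msg = new StringBuilder();
        msg.append("WARNING : There are about ").append(missingBlocks)
           .append(" missing blocks. Please check the log or run fsck.");
        // dfs.data.dir is a DataNode-side setting, so a null here only
        // means the NameNode's copy of the config omits it; it is a
        // hint, not proof of misconfiguration.
        if (conf.get("dfs.data.dir") == null) {
          msg.append(" If this cluster was recently started, verify that")
             .append(" dfs.data.dir is set on the DataNodes.");
        }
        return msg.toString();
      }
    }

Gating the hint on cluster age, as the "14 days" idea below suggests, would
additionally need the namesystem's start time, which this sketch leaves out.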
> Improve corrupt files warning message
> -------------------------------------
>
> Key: HDFS-1954
> URL: https://issues.apache.org/jira/browse/HDFS-1954
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: philo vivero
> Assignee: Patrick Hunt
> Fix For: 0.22.0
>
> Attachments: HDFS-1954.patch, HDFS-1954.patch
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> On NameNode web interface, you may get this warning:
> WARNING : There are about 32 missing blocks. Please check the log or run
> fsck.
> If the cluster was started less than 14 days ago, it would be great to
> add: "Is dfs.data.dir defined?"
> If, at the point of that error message, that parameter could be checked and
> the error changed to "OMG dfs.data.dir isn't defined!", that would be even
> better. As is, troubleshooting undefined parameters is a difficult
> proposition.
> I suspect this is an easy fix.