[ https://issues.apache.org/jira/browse/HDFS-528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12868553#action_12868553 ]

Todd Lipcon commented on HDFS-528:
----------------------------------

Hi Dhruba,

It seems you agreed with the original premise of the issue: avoiding 
replication storms early in NN startup. Certainly you can do this with an 
external tool, by forcing the NN to always start in safemode and manually 
(through the tool) kicking it out of safemode when you're ready. But why not 
let Hadoop do this for us?
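For reference, the external-tool workaround described above could be sketched roughly as below. This is a hypothetical script, not part of the patch: the `hadoop dfsadmin` subcommands are real, but the "Datanodes available: N" report format parsed here is an assumption and may differ between Hadoop versions.

```shell
# Count live DNs from `hadoop dfsadmin -report` output.
# The "Datanodes available: N (...)" line format is assumed.
count_live() {
  sed -n 's/.*Datanodes available: \([0-9]*\).*/\1/p'
}

# wait_for_datanodes N: poll until at least N DNs have reported to
# the NN, then manually take the NN out of safemode.
wait_for_datanodes() {
  min=$1
  while :; do
    live=$(hadoop dfsadmin -report 2>/dev/null | count_live)
    [ "${live:-0}" -ge "$min" ] && break
    sleep 5
  done
  hadoop dfsadmin -safemode leave
}
```

Usage from a startup script would be something like `wait_for_datanodes 3`. The point of the JIRA is that every operator ends up writing a loop like this by hand when the NN could do it natively.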

The improved error message I see as a bonus on top of a generally useful 
feature, one which I don't think we should force every operator to write 
themselves when it's so simple to integrate into the existing feature. FWIW 
this code has been shipping with CDH for 8 months with no issues.

> Add ability for safemode to wait for a minimum number of live datanodes
> -----------------------------------------------------------------------
>
>                 Key: HDFS-528
>                 URL: https://issues.apache.org/jira/browse/HDFS-528
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: scripts
>    Affects Versions: 0.22.0
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: hdfs-528-v2.txt, hdfs-528-v3.txt, hdfs-528.txt, 
> hdfs-528.txt
>
>
> When starting up a fresh cluster programmatically, users often want to wait 
> until DFS is "writable" before continuing in a script. "dfsadmin -safemode 
> wait" doesn't quite work for this on a completely fresh cluster, since when 
> there are 0 blocks on the system, 100% of them are accounted for before any 
> DNs have reported.
> This JIRA is to add a command which waits until a certain number of DNs have 
> reported as alive to the NN.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.