Basically, if the datanodes crashed or were not stopped gracefully, it is not a 
big deal: the block data is still on their disks, and the mapping of which 
blocks belong to which files lives in the namenode's metadata.
So I would not worry about them; you can always kill the processes with the 
kill command, finding their PIDs by process name (use jps).
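As a quick sketch of that step, the following finds any DataNode JVMs via jps and sends them SIGTERM (it assumes jps from the JDK is on the PATH, and just prints a note if nothing matches):

```shell
#!/bin/sh
# Sketch: find stray DataNode JVMs with jps and kill them.
# Assumes jps (shipped with the JDK) is on PATH.
pids=$(jps 2>/dev/null | awk '/DataNode/ {print $1}')
if [ -n "$pids" ]; then
  echo "Killing DataNode pid(s): $pids"
  kill $pids    # escalate to kill -9 only if they ignore SIGTERM
else
  echo "No DataNode process found"
fi
```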
A namenode crash is much more serious, but its metadata stays in the directory 
you configured as part of the cluster setup (dfs.namenode.name.dir in 
hdfs-site.xml), along with all the checkpoint files.
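For reference, the relevant hdfs-site.xml property looks something like this (the path below is only a placeholder, not from your setup):

```xml
<!-- hdfs-site.xml: where the namenode persists its metadata (fsimage + edit logs). -->
<!-- /data/hadoop/namenode is a placeholder; substitute your own directory. -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/data/hadoop/namenode</value>
</property>
```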
Does running start-dfs.sh not bring the namenode back up, correct?
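If start-dfs.sh fails, you can also try starting just the namenode daemon on its own. A sketch, assuming a Hadoop 3.x install with hdfs on the PATH (the Hadoop 2.x form is in the comment):

```shell
#!/bin/sh
# Sketch: start only the namenode daemon from the CLI.
# Assumes hdfs (Hadoop 3.x) is on PATH; prints a note otherwise.
if command -v hdfs >/dev/null 2>&1; then
  hdfs --daemon start namenode
else
  echo "hdfs not on PATH - load your Hadoop environment first"
fi
# Hadoop 2.x equivalent:
#   $HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
```

Check the namenode log under $HADOOP_HOME/logs afterwards to see why it went down in the first place.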

On 2018/10/16 17:48:34, Atul Rajan <atul.raja...@gmail.com> wrote: 
> Hello community,
> 
> My cluster was up until recently, but today my namenode suddenly went down, 
> and when I stop and start it again the datanodes do not stop gracefully. 
> Can you please guide me on how to bring up the namenode from the CLI? 
> 
> Sent from my iPhone
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: user-h...@hadoop.apache.org
> 
> 
