guyuqi commented on PR #913:
URL: https://github.com/apache/bigtop/pull/913#issuecomment-1156060230

   @iwasakims Thanks for your comments.
   Just as you mentioned, users could overwrite the existing fsimage with `hdfs namenode -format -force` or `-upgrade`, but not every newcomer would know how to handle errors like:
   ```
   Failed to start namenode.
   java.io.IOException:
   File system image contains an old layout version -63.
   An upgrade to version -65 is required.
   .....
   ...
   ```
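
   For reference, the two recovery paths mentioned above can be sketched as shell commands (a sketch only, run on the namenode host; `-upgrade` migrates the existing metadata in place, while `-format -force` discards it):

   ```shell
   # Option 1: upgrade the existing fsimage to the new layout version.
   # This preserves the namespace and is the usual fix for the error above.
   hdfs namenode -upgrade

   # Option 2: reinitialize the namenode from scratch.
   # WARNING: this destroys the existing namespace metadata;
   # only acceptable on a throwaway or freshly installed cluster.
   hdfs namenode -format -force
   ```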
   
   HDFS is a distributed file system designed to keep data safe. `Data replication` and the `secondarynamenode` would help recover one node from the other nodes of the same cluster if its fsimage and edit files were deleted. (Is it really that dangerous?)
   From my perspective, if users remove (`$1 = 0` here, not an upgrade) the Hadoop packages from a node, there seems to be no need to retain the deprecated data.
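
   To make the `$1 = 0` point concrete, here is a minimal sketch of the kind of `%postun` logic this PR is discussing. In RPM scriptlets, `$1` is the number of package instances remaining after the transaction (`0` = full uninstall, `>= 1` = upgrade). The helper function and its second argument are hypothetical, added only so the decision is testable; the real scriptlet would hard-code the data directory (e.g. `/hadoop/hdfs`):

   ```shell
   #!/bin/sh
   # Hypothetical helper mirroring an RPM %postun scriptlet decision.
   #   $1: count of package instances left after the transaction
   #   $2: HDFS data directory holding fsimage/edits
   cleanup_hdfs_data() {
       remaining="$1"
       data_dir="$2"
       if [ "$remaining" -eq 0 ]; then
           # Full uninstall: remove the stale fsimage/edits directory.
           rm -rf "$data_dir"
       fi
       # Upgrade (remaining >= 1): leave the data in place untouched.
   }
   ```

   With this split, an upgrade keeps the metadata while a genuine removal clears it, which matches the behavior argued for above.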
   
   On the other hand, it's convenient for users to deploy different Hadoop versions with Mpack (or other automated deployment tools) if we remove the deprecated Hadoop fsimage after RPM/deb uninstallation.
   
   Is it the established convention that we don't touch `/hadoop/hdfs` even if it would be an obstacle for newcomers?
   
   
   
   

