[ https://issues.apache.org/jira/browse/HDFS-13079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Shashikant Banerjee updated HDFS-13079:
---------------------------------------
    Attachment: HDFS-13079.003.patch

> Provide a config to start namenode in safemode state upto a certain transaction id
> ----------------------------------------------------------------------------------
>
>                 Key: HDFS-13079
>                 URL: https://issues.apache.org/jira/browse/HDFS-13079
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>         Attachments: HDFS-13079.001.patch, HDFS-13079.002.patch, HDFS-13079.003.patch
>
> In some cases it is necessary to roll the Namenode back to a certain transaction id. This is especially needed when the user issues a {{rm -Rf -skipTrash}} by mistake.
> Rolling back to a transaction id allows taking a peek at the filesystem at a particular instant. This jira proposes to provide a configuration variable with which the namenode can be started up to a certain transaction id. The filesystem will be in a readonly safemode which cannot be overridden manually; it can only be left by removing the config value from the config file. Please also note that this will not cause any changes in the filesystem state: the filesystem will remain in safemode and no modifications will be allowed.
> Please note that if a checkpoint has already happened and the requested transaction id has been subsumed in an FSImage, then the namenode will be started with the next nearest transaction id, and further FSImage files and edits will be ignored.
> If the checkpoint hasn't happened, then the namenode will be started with the exact transaction id.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
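
For illustration, the proposed configuration might be set in hdfs-site.xml roughly as follows. This is only a sketch: the property name `dfs.namenode.safemode.rollback.txid` is hypothetical, since the actual key is defined by the attached patch.

```xml
<!-- Hypothetical property name; the real key is introduced by HDFS-13079 patches. -->
<property>
  <name>dfs.namenode.safemode.rollback.txid</name>
  <value>12345</value>
  <description>
    If set, the NameNode loads edits only up to this transaction id and
    stays in a read-only safemode that cannot be left manually. Remove
    this property and restart the NameNode to resume normal operation.
  </description>
</property>
```

Per the description above, if transaction id 12345 has already been subsumed into a checkpointed FSImage, the NameNode would instead start at the next nearest available transaction id.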