anoopsjohn commented on a change in pull request #2113:
URL: https://github.com/apache/hbase/pull/2113#discussion_r464896310



##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/InitMetaProcedure.java
##########
@@ -71,7 +71,11 @@ private static void writeFsLayout(Path rootDir, Configuration conf) throws IOExc
     LOG.info("BOOTSTRAP: creating hbase:meta region");
     FileSystem fs = rootDir.getFileSystem(conf);
     Path tableDir = CommonFSUtils.getTableDir(rootDir, TableName.META_TABLE_NAME);
-    if (fs.exists(tableDir) && !fs.delete(tableDir, true)) {
+    boolean removeMeta = conf.getBoolean(HConstants.REMOVE_META_ON_RESTART,

Review comment:
      During HMaster start, as part of bootstrap, we create the cluster ID, write it to the FS and then to ZK, and only then create the hbase:meta table FS layout. So on a cluster recreate over existing data, the cluster ID and the meta FS layout will already be present in the FS, but there will be no cluster ID in ZK. Yes, it seems we can use that as the indication of a cluster recreate over existing data. On HMaster start this is something we need to check first of all and then track. If this mode is true, later when (if) we execute INIT_META_WRITE_FS_LAYOUT, we should not delete the meta dir.
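      Roughly, that detection and the guard could look like the sketch below. This is a minimal sketch only: the helpers getClusterIdFromFs / getClusterIdFromZk and the recreateMode parameter threaded into writeFsLayout are hypothetical names for illustration, not existing HBase APIs:
```java
// Minimal sketch only. getClusterIdFromFs / getClusterIdFromZk are assumed
// helpers (e.g. reading hbase.id from the FS and the cluster id znode from ZK),
// and the recreateMode parameter is hypothetical.
private static boolean isRecreateOverExistingData(FileSystem fs, Path rootDir,
    Configuration conf) throws IOException {
  String fsClusterId = getClusterIdFromFs(fs, rootDir);
  String zkClusterId = getClusterIdFromZk(conf);
  Path metaTableDir = CommonFSUtils.getTableDir(rootDir, TableName.META_TABLE_NAME);
  // Cluster ID and meta layout already on the FS, but no cluster ID in ZK yet:
  // this is the recreate-over-existing-data case.
  return fsClusterId != null && fs.exists(metaTableDir) && zkClusterId == null;
}

private static void writeFsLayout(Path rootDir, Configuration conf,
    boolean recreateMode) throws IOException {
  FileSystem fs = rootDir.getFileSystem(conf);
  Path tableDir = CommonFSUtils.getTableDir(rootDir, TableName.META_TABLE_NAME);
  // In recreate mode keep the existing meta dir instead of wiping it.
  if (!recreateMode && fs.exists(tableDir) && !fs.delete(tableDir, true)) {
    throw new IOException("Failed to delete existing meta table dir " + tableDir);
  }
  // ... create the meta region layout as before ...
}
```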
Bootstrap when we write that proc to MasterProcWal, we can include this mode 
(boolean) info also.  This is a protobuf message anyways.  So even if this HM 
got killed and restarted (at a point where the clusterId was written to zk but 
the Meta FS layout part was not reached) we can use the info added as part of 
the bootstrap wal entry and make sure NOT to delete the meta dir.
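      For persisting the flag, a minimal sketch of the state (de)serialization in InitMetaProcedure, assuming a hypothetical recreate_mode bool field added to the InitMetaStateData protobuf message (the setRecreateMode/getRecreateMode accessors follow from that assumption):
```java
// Sketch: persist the recreate-mode flag in the procedure state so that an
// HMaster killed and restarted mid-bootstrap still sees it on proc WAL replay.
// Assumes a "recreate_mode" bool field in InitMetaStateData (hypothetical) and
// a recreateMode field on InitMetaProcedure.
@Override
protected void serializeStateData(ProcedureStateSerializer serializer) throws IOException {
  super.serializeStateData(serializer);
  serializer.serialize(MasterProcedureProtos.InitMetaStateData.newBuilder()
    .setRecreateMode(recreateMode).build());
}

@Override
protected void deserializeStateData(ProcedureStateSerializer serializer) throws IOException {
  super.deserializeStateData(serializer);
  MasterProcedureProtos.InitMetaStateData data =
    serializer.deserialize(MasterProcedureProtos.InitMetaStateData.class);
  // Restore the flag so INIT_META_WRITE_FS_LAYOUT can skip deleting the meta dir.
  this.recreateMode = data.getRecreateMode();
}
```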
   Can we do this part alone in a sub-task and provide a patch, please? This is a very key part, which is why it is better to fine-tune it with all the different test cases. Sounds good?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

