[ https://issues.apache.org/jira/browse/HBASE-537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12581004#action_12581004 ]
stack commented on HBASE-537:
-----------------------------

This may be a blocker; those with big clusters will be annoyed at having to manually wait on exit of safe mode before starting hbase. They can put an invocation of ./bin/hadoop dfsadmin -safemode wait in front of the hbase startup.

In the past, it was suggested that hbase come up anyway, even if hdfs is in safe mode, and that the UI and logs show us as blocked waiting on hdfs to exit safe mode. Shouldn't be hard to do. Might be our only alternative since hadoop is now elsewhere. Could do the following pseudo-code in the HMaster constructor:

{code}
...
Path rootRegionDir = HRegion.getRegionDir(rootdir, HRegionInfo.rootRegionInfo);
LOG.info("Root region dir: " + rootRegionDir.toString());
try {
  // Before we make our first fs access, check to see if hdfs is in safe
  // mode and park here until it exits.
  if (ishdfs(this.conf)) {
    DFSAdmin admin = new DFSAdmin(this.conf);
    if (admin.isInSafeMode()) {
      admin.parkUntilWeExitSafeModeLoggingMessageEverySoOften();
    }
  }
...
Go on to create -ROOT- and .META. if they do not exist, etc.
{code}

Would need to move the startup of the master info server ahead of this code, with its own query about the state of hdfs.

> We no longer wait on hdfs to exit safe mode
> -------------------------------------------
>
>          Key: HBASE-537
>          URL: https://issues.apache.org/jira/browse/HBASE-537
>      Project: Hadoop HBase
>   Issue Type: Bug
>     Reporter: stack
>
> We used to wait on hdfs to exit safe mode before going on to start up hbase, but this feature has been broken since we moved out of hadoop contrib. Now when you try to start with hdfs in safe mode you get:
>
> {code}
> 08/03/21 04:39:56 FATAL hbase.HMaster: Not starting HMaster because:
> org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.dfs.SafeModeException: Cannot create directory /hbase010. Name node is in safe mode.
> Safe mode will be turned off automatically.
>         at org.apache.hadoop.dfs.FSNamesystem.mkdirsInternal(FSNamesystem.java:1571)
>         at org.apache.hadoop.dfs.FSNamesystem.mkdirs(FSNamesystem.java:1559)
>         at org.apache.hadoop.dfs.NameNode.mkdirs(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> {code}
>
> If you are lucky, it appears on STDOUT/ERR, but it may just be stuffed into the logs while everything looks like it's running properly.
>
> Noticed first by Lars George.
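To make the intent of the pseudo-code in the comment above concrete, here is a minimal sketch of a wait-on-safe-mode loop. It assumes the master's FileSystem is an hdfs DistributedFileSystem and polls the namenode with setSafeMode(SAFEMODE_GET), which reports safe-mode state without changing it; the class name SafeModeWait, the sleepMillis parameter, and the log message are illustrative only, not existing HBase code.

{code}
// Minimal sketch only (not existing HBase code): park until hdfs exits safe
// mode, logging a message every so often.  Package names assume the Hadoop of
// this era (org.apache.hadoop.dfs); later versions moved these classes to
// org.apache.hadoop.hdfs.
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.dfs.DistributedFileSystem;
import org.apache.hadoop.dfs.FSConstants;
import org.apache.hadoop.fs.FileSystem;

public class SafeModeWait {
  private static final Log LOG = LogFactory.getLog(SafeModeWait.class);

  /**
   * Block until the namenode reports it has left safe mode.
   * @param fs filesystem the master will use
   * @param sleepMillis how long to sleep between checks
   */
  public static void waitOnSafeMode(final FileSystem fs, final long sleepMillis)
      throws IOException {
    if (!(fs instanceof DistributedFileSystem)) {
      // Not hdfs (e.g. local fs in tests); nothing to wait on.
      return;
    }
    DistributedFileSystem dfs = (DistributedFileSystem) fs;
    // SAFEMODE_GET only queries safe-mode state; it does not change it.
    while (dfs.setSafeMode(FSConstants.SafeModeAction.SAFEMODE_GET)) {
      LOG.info("Waiting on hdfs to exit safe mode...");
      try {
        Thread.sleep(sleepMillis);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return;
      }
    }
  }
}
{code}

The HMaster constructor would call something like this before its first filesystem access, with the master info server started first so the UI and logs can show that startup is blocked waiting on hdfs to exit safe mode, as suggested in the comment above.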