[
https://issues.apache.org/jira/browse/HBASE-4025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13054590#comment-13054590
]
Subbu M Iyer commented on HBASE-4025:
-------------------------------------
Maybe we should create all tables under /hbase/tables/<table name> instead of
/hbase/<table name>, so that we can avoid future cases where we create other
folders under /hbase (such as .logs, .corrupt, et al.) that do not contain
table descriptors.
The layout would then look something like:
/hbase/.logs
/hbase/.corrupt
/hbase/.oldlogs
/hbase/.META.
/hbase/-ROOT-
/hbase/<future non user system folders>
/hbase/UserTables/<user table folder>/.tableinfo
and when we need to retrieve all the table descriptors, we simply iterate over
the /hbase/UserTables folder rather than /hbase, ignoring all system folders.
Another option would be:
/hbase/System/.logs, .oldlogs, .corrupt et al.
/hbase/UserTables/<user tables>
Either way, we avoid adding a band-aid fix to the table-descriptor reading
logic every time we add a new system folder.
thoughts?
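The filtering that both options rely on could be sketched roughly as below. This is plain Java and not the actual FSTableDescriptors code; the class name, method name, and blacklist contents are illustrative only (a real fix would match whatever system folders the master actually creates):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TableDirFilter {
    // Hypothetical blacklist of known non-table folders under the HBase root.
    private static final Set<String> SYSTEM_DIRS = new HashSet<>(
        Arrays.asList(".logs", ".oldlogs", ".corrupt", ".META.", "-ROOT-"));

    /** Returns only the directory names that look like user tables. */
    public static List<String> userTableDirs(List<String> dirNames) {
        List<String> tables = new ArrayList<>();
        for (String name : dirNames) {
            // Skip anything on the system-folder blacklist.
            if (!SYSTEM_DIRS.contains(name)) {
                tables.add(name);
            }
        }
        return tables;
    }

    public static void main(String[] args) {
        List<String> entries = Arrays.asList(
            ".logs", ".corrupt", ".oldlogs", ".META.", "-ROOT-", "t1", "usertable");
        System.out.println(userTableDirs(entries)); // prints [t1, usertable]
    }
}
```

With the /hbase/UserTables layout the blacklist becomes unnecessary, since everything under that folder is a table by construction; the filter above is only needed if tables stay directly under /hbase.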
> Server startup fails during startup due to failure in loading all table
> descriptors. We should ignore .logs,.oldlogs,.corrupt,.META.,-ROOT- folders
> while reading descriptors
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HBASE-4025
> URL: https://issues.apache.org/jira/browse/HBASE-4025
> Project: HBase
> Issue Type: Bug
> Affects Versions: 0.92.0
> Reporter: Subbu M Iyer
> Attachments:
> HBASE-4025_-_Server_startup_fails_while_reading_table_descriptor_from__corrupt_folder_1.patch
>
> Original Estimate: 2h
> Remaining Estimate: 2h
>
> 2011-06-23 21:39:52,524 WARN org.apache.hadoop.hbase.monitoring.TaskMonitor:
> Status org.apache.hadoop.hbase.monitoring.MonitoredTaskImpl@2f56f920 appears
> to have been leaked
> 2011-06-23 21:40:06,465 WARN org.apache.hadoop.hbase.master.HMaster: Failed
> getting all descriptors
> java.io.FileNotFoundException: No status for
> hdfs://ciq.com:9000/hbase/.corrupt
> at
> org.apache.hadoop.hbase.util.FSUtils.getTableInfoModtime(FSUtils.java:888)
> at
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:122)
> at
> org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:149)
> at
> org.apache.hadoop.hbase.master.HMaster.getHTableDescriptors(HMaster.java:1442)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:340)
> at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1138)
> 2011-06-23 21:40:26,790 WARN org.apache.hadoop.hbase.master.HMaster: Failed
> getting all descriptors
> java.io.FileNotFoundException: No status for
> hdfs://ciq.com:9000/hbase/.corrupt
> at
> org.apache.hadoop.hbase.util.FSUtils.getTableInfoModtime(FSUtils.java:888)
> at
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:122)
> at
> org.apache.hadoop.hbase.util.FSTableDescriptors.getAll(FSTableDescriptors.java:149)
> at
> org.apache.hadoop.hbase.master.HMaster.getHTableDescriptors(HMaster.java:1442)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:340)
> at
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1138)
--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira