[ 
https://issues.apache.org/jira/browse/HBASE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jieshan Bean updated HBASE-3820:
--------------------------------

    Attachment: HBASE-3820-90-V3.patch

To narrow the scope of this issue, I modified the patch:
1. Keep the newly added method that checks whether the DFS is in safemode.
2. Before splitting, make sure the DFS is not in safemode. If it is in 
safemode, wait until it exits safemode.
3. The DFS may still enter safemode while the split is in progress; in that 
case an IOException is thrown, which HMaster catches before checking the DFS 
again.
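The steps above can be sketched as a simple poll loop. This is a minimal illustration, not the attached patch: the SafeModeCheck interface is hypothetical, standing in for however the patch actually queries the namenode (presumably something like DistributedFileSystem.setSafeMode with SAFEMODE_GET), so the loop can run without a live cluster.

```java
public class SafeModeWait {
    // Hypothetical stand-in for the real safemode query against the namenode.
    interface SafeModeCheck {
        boolean inSafeMode(); // true while the namenode is in safemode
    }

    // Poll until the DFS leaves safemode; returns how many polls were needed.
    static int waitForSafeModeExit(SafeModeCheck fs, long retryMillis)
            throws InterruptedException {
        int polls = 0;
        while (fs.inSafeMode()) {
            polls++;
            Thread.sleep(retryMillis);
        }
        return polls;
    }

    public static void main(String[] args) throws InterruptedException {
        // Stub namenode: in safemode for the first two polls, then out.
        final int[] calls = {0};
        SafeModeCheck stub = () -> calls[0]++ < 2;
        int polls = waitForSafeModeExit(stub, 1L);
        if (polls != 2) throw new AssertionError("expected 2 polls, got " + polls);
        System.out.println("DFS left safemode after " + polls + " polls");
    }
}
```

With this in place before splitLog(), step 3 only has to handle the race where safemode begins after the check but before the split finishes.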

> Splitlog() executed while the namenode was in safemode may cause data-loss
> --------------------------------------------------------------------------
>
>                 Key: HBASE-3820
>                 URL: https://issues.apache.org/jira/browse/HBASE-3820
>             Project: HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.90.2
>            Reporter: Jieshan Bean
>             Fix For: 0.90.4
>
>         Attachments: HBASE-3820-90-V3.patch, HBASE-3820-MFSFix-90-V2.patch, 
> HBASE-3820-MFSFix-90.patch
>
>
> I found this problem when the namenode went into safemode for some unclear 
> reason. 
> There is one patch related to this problem:
>    try {
>       HLogSplitter splitter = HLogSplitter.createLogSplitter(
>         conf, rootdir, logDir, oldLogDir, this.fs);
>       try {
>         splitter.splitLog();
>       } catch (OrphanHLogAfterSplitException e) {
>         LOG.warn("Retrying splitting because of:", e);
>         // An HLogSplitter instance can only be used once.  Get new instance.
>         splitter = HLogSplitter.createLogSplitter(conf, rootdir, logDir,
>           oldLogDir, this.fs);
>         splitter.splitLog();
>       }
>       splitTime = splitter.getTime();
>       splitLogSize = splitter.getSize();
>     } catch (IOException e) {
>       checkFileSystem();
>       LOG.error("Failed splitting " + logDir.toString(), e);
>       master.abort("Shutting down HBase cluster: Failed splitting hlog 
> files...", e);
>     } finally {
>       this.splitLogLock.unlock();
>     }
> This does help to some extent when the namenode process has exited or been 
> killed, but it does not account for the namenode being in safemode.
>    I think the root cause is the checkFileSystem() method.
>    It is meant to check whether HDFS is working normally (i.e. both reads 
> and writes succeed), which was probably its original purpose. This is how 
> the method is implemented:
>     DistributedFileSystem dfs = (DistributedFileSystem) fs;
>     try {
>       if (dfs.exists(new Path("/"))) {  
>         return;
>       }
>     } catch (IOException e) {
>       exception = RemoteExceptionHandler.checkIOException(e);
>     }
>    
>    I have checked the HDFS code and learned that while the namenode is in 
> safemode, dfs.exists(new Path("/")) still returns true, because the file 
> system keeps providing read-only service. So this method only verifies that 
> the DFS is readable, which I think is not sufficient.
>     
>    
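The quoted analysis suggests a write probe would catch what the read probe misses: in safemode reads succeed but writes throw. Here is a hedged sketch of that idea; the ProbeFs interface and the probe path are hypothetical stand-ins for DistributedFileSystem, so the behavior can be demonstrated without a cluster.

```java
import java.io.IOException;

public class WriteProbe {
    // Hypothetical filesystem abstraction, standing in for
    // DistributedFileSystem so the sketch runs without HDFS.
    interface ProbeFs {
        boolean exists(String path);                          // read-only op
        void createAndDelete(String path) throws IOException; // write op
    }

    // A write probe: succeeds only if the DFS accepts writes. In safemode,
    // reads still work but any write throws an IOException, so this catches
    // the case that a read probe like dfs.exists(new Path("/")) misses.
    static boolean isWritable(ProbeFs fs) {
        try {
            fs.createAndDelete("/.probe"); // hypothetical probe path
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Stub modeling a namenode in safemode: readable but not writable.
        ProbeFs safemodeFs = new ProbeFs() {
            public boolean exists(String path) { return true; }
            public void createAndDelete(String path) throws IOException {
                throw new IOException("NameNode is in safe mode");
            }
        };
        if (!safemodeFs.exists("/")) throw new AssertionError();
        if (isWritable(safemodeFs)) throw new AssertionError();
        System.out.println("read probe passes, write probe fails in safemode");
    }
}
```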

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
