[jira] [Commented] (HBASE-5874) When 'fs.default.name' is not configured, the hbck tool and Merge tool throw IllegalArgumentException.
[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13293235#comment-13293235 ]

fulin wang commented on HBASE-5874:
-----------------------------------

Thanks, Hudson, for your attention.

> When 'fs.default.name' is not configured, the hbck tool and Merge tool throw IllegalArgumentException.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-5874
>                 URL: https://issues.apache.org/jira/browse/HBASE-5874
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.90.6
>            Reporter: fulin wang
>            Assignee: fulin wang
>             Fix For: 0.90.7, 0.96.0, 0.94.1, 0.92.3
>         Attachments: HBASE-5874-0.90-v2.patch, HBASE-5874-0.90.patch, HBASE-5874-trunk-v2.patch, HBASE-5874-trunk-v3.patch, HBASE-5874-trunk.patch
>
> When HBase does not configure the 'fs.default.name' attribute, the hbck tool and the Merge tool throw IllegalArgumentException. We should set the 'fs.default.name' attribute in the code of both tools.
>
> hbck exception:
> Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://160.176.0.101:9000/hbase/.META./1028785192/.regioninfo, expected: file:///
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:412)
>         at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:59)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:382)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:285)
>         at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:128)
>         at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:301)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:489)
>         at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegioninfo(HBaseFsck.java:565)
>         at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegionInfos(HBaseFsck.java:596)
>         at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:332)
>         at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:360)
>         at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:2907)
>
> Merge exception:
> [2012-05-05 10:48:24,830] [ERROR] [main] [org.apache.hadoop.hbase.util.Merge 381] exiting due to error
> java.lang.IllegalArgumentException: Wrong FS: hdfs://160.176.0.101:9000/hbase/.META./1028785192/.regioninfo, expected: file:///
>         at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:412)
>         at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:59)
>         at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:382)
>         at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:285)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:823)
>         at org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:415)
>         at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:340)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2679)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2665)
>         at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2634)
>         at org.apache.hadoop.hbase.util.MetaUtils.openMetaRegion(MetaUtils.java:276)
>         at org.apache.hadoop.hbase.util.MetaUtils.scanMetaRegion(MetaUtils.java:261)
>         at org.apache.hadoop.hbase.util.Merge.run(Merge.java:115)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>         at org.apache.hadoop.hbase.util.Merge.main(Merge.java:379)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
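The root cause in both stack traces is that, without 'fs.default.name', Hadoop's FileSystem falls back to the local filesystem (file:///) while the region paths live on HDFS. The direction discussed in this issue is to derive the default filesystem from 'hbase.rootdir', as the MasterFileSystem class does. The following is a hedged, self-contained sketch of that derivation only; the class and method names are illustrative and this is not the actual patch:

```java
import java.net.URI;

public class FsDefaultNameSketch {

    // Illustrative helper (not the HBASE-5874 patch code): reduce a
    // root-directory URI such as hdfs://160.176.0.101:9000/hbase to its
    // scheme://authority part, which is the value 'fs.default.name'
    // expects. Setting this in the tool's Configuration lets hbck and
    // Merge resolve hdfs:// paths instead of defaulting to file:///.
    static String deriveDefaultFs(String rootDir) {
        URI root = URI.create(rootDir);
        return root.getScheme() + "://" + root.getAuthority();
    }

    public static void main(String[] args) {
        String rootDir = "hdfs://160.176.0.101:9000/hbase";
        // Without this value the tools fail with
        // "Wrong FS: ... expected: file:///".
        System.out.println(deriveDefaultFs(rootDir));
    }
}
```

In the real tools this value would presumably be placed into the Hadoop Configuration (e.g. under the 'fs.default.name' key) before any FileSystem is opened.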
[jira] [Commented] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13271229#comment-13271229 ]

fulin wang commented on HBASE-5874:
-----------------------------------

Thank you for your attention.
1. For the hbck tool, I think we need to add the 'fs.default.name' attribute.
2. For the Merge tool, I will fix the help text, because it is currently wrong.
[jira] [Updated] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
------------------------------
    Attachment: HBASE-5874-trunk-v2.patch
                HBASE-5874-0.90-v2.patch

TestHBaseFsck and TestMergeTool have been tested.
[jira] [Updated] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
------------------------------
    Attachment: HBASE-5874-trunk-v3.patch

Thanks, Jonathan Hsieh, for the review. Following your suggestion, I have made a patch; please review it. If you find a problem with the patch, please modify it. Thanks.
[jira] [Commented] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266292#comment-13266292 ]

fulin wang commented on HBASE-5874:
-----------------------------------

Once the community agrees that this is an issue, patches for the other branches will be needed.
[jira] [Commented] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13264207#comment-13264207 ]

fulin wang commented on HBASE-5874:
-----------------------------------

I have tested it, and it passes. There are reasons to add the 'fs.default.name' attribute:
1) It makes the hbck tool and the Merge tool easier to use.
2) We should support old versions; see the MasterFileSystem class for reference.
I suggest adding the attribute.
[jira] [Updated] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
------------------------------
    Attachment: HBASE-5874-trunk.patch

I have run the TestHBaseFsck and TestMergeTool test cases on trunk.
[jira] [Created] (HBASE-5874) HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.

fulin wang created HBASE-5874:
------------------------------

             Summary: HBase does not configure the 'fs.default.name' attribute; the hbck tool and Merge tool throw IllegalArgumentException.
                 Key: HBASE-5874
                 URL: https://issues.apache.org/jira/browse/HBASE-5874
             Project: HBase
          Issue Type: Bug
          Components: hbck
            Reporter: fulin wang
[jira] [Updated] (HBASE-5874) When 'fs.default.name' is not configured, the hbck tool and the Merge tool throw IllegalArgumentException.
[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
--
Affects Version/s: 0.90.6

Key: HBASE-5874
URL: https://issues.apache.org/jira/browse/HBASE-5874
Project: HBase
Issue Type: Bug
Components: hbck
Affects Versions: 0.90.6
Reporter: fulin wang

If HBase does not configure the 'fs.default.name' attribute, the hbck tool and the Merge tool throw IllegalArgumentException. We should set the 'fs.default.name' attribute in the code of the hbck and Merge tools.

hbck exception:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://160.176.0.101:9000/hbase/.META./1028785192/.regioninfo, expected: file:///
  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:412)
  at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:59)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:382)
  at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:285)
  at org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:128)
  at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:301)
  at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:489)
  at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegioninfo(HBaseFsck.java:565)
  at org.apache.hadoop.hbase.util.HBaseFsck.loadHdfsRegionInfos(HBaseFsck.java:596)
  at org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:332)
  at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:360)
  at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:2907)

Merge exception:
[2012-05-05 10:48:24,830] [ERROR] [main] [org.apache.hadoop.hbase.util.Merge 381] exiting due to error
java.lang.IllegalArgumentException: Wrong FS: hdfs://160.176.0.101:9000/hbase/.META./1028785192/.regioninfo, expected: file:///
  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:412)
  at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:59)
  at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:382)
  at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:285)
  at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:823)
  at org.apache.hadoop.hbase.regionserver.HRegion.checkRegioninfoOnFilesystem(HRegion.java:415)
  at org.apache.hadoop.hbase.regionserver.HRegion.initialize(HRegion.java:340)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2679)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2665)
  at org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:2634)
  at org.apache.hadoop.hbase.util.MetaUtils.openMetaRegion(MetaUtils.java:276)
  at org.apache.hadoop.hbase.util.MetaUtils.scanMetaRegion(MetaUtils.java:261)
  at org.apache.hadoop.hbase.util.Merge.run(Merge.java:115)
  at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
  at org.apache.hadoop.hbase.util.Merge.main(Merge.java:379)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
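The "Wrong FS" failure above comes from Hadoop's FileSystem.checkPath, which rejects any path whose URI scheme differs from the filesystem's own URI; with 'fs.default.name' unset, the tools fall back to the local filesystem (file:///) and then trip over hdfs:// paths derived from hbase.rootdir. A minimal, stdlib-only sketch of that scheme check (the class WrongFsCheck and its checkPath method are illustrative, not the actual Hadoop code):

```java
import java.net.URI;

public class WrongFsCheck {
    // Illustrative version of the scheme comparison: a local filesystem
    // (file:///) must reject an hdfs:// path with IllegalArgumentException,
    // producing a message shaped like the one in the hbck trace above.
    public static void checkPath(URI fsUri, URI path) {
        String fsScheme = fsUri.getScheme();
        String pathScheme = path.getScheme();
        if (pathScheme != null && !pathScheme.equalsIgnoreCase(fsScheme)) {
            throw new IllegalArgumentException(
                "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI localFs = URI.create("file:///");
        // A file:// path is accepted on the local filesystem.
        checkPath(localFs, URI.create("file:///hbase/.regioninfo"));
        try {
            // Without fs.default.name, an hdfs:// path from .META. fails.
            checkPath(localFs, URI.create("hdfs://160.176.0.101:9000/hbase/.regioninfo"));
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

This is why the proposed fix sets 'fs.default.name' in the tools before any paths are resolved: once the default filesystem's scheme matches hbase.rootdir, the check passes.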
[jira] [Updated] (HBASE-5874) When 'fs.default.name' is not configured, the hbck tool and the Merge tool throw IllegalArgumentException.
[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
--
Attachment: (was: HBASE-5874-0.90.patch)
[jira] [Updated] (HBASE-5874) When 'fs.default.name' is not configured, the hbck tool and the Merge tool throw IllegalArgumentException.
[ https://issues.apache.org/jira/browse/HBASE-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-5874:
--
Attachment: HBASE-5874-0.90.patch
[jira] [Commented] (HBASE-4124) ZK restarted while assigning a region; the new active HM re-assigned it, but the RS warned 'already online on this server'.
[ https://issues.apache.org/jira/browse/HBASE-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13084780#comment-13084780 ]

fulin wang commented on HBASE-4124:
---
gaojinchao, please fix this issue. Thanks.

Key: HBASE-4124
URL: https://issues.apache.org/jira/browse/HBASE-4124
Project: HBase
Issue Type: Bug
Components: master
Reporter: fulin wang
Attachments: log.txt
Original Estimate: 0.4h
Remaining Estimate: 0.4h

ZK restarted while assigning a region; the new active HM re-assigned it, but the RS warned 'already online on this server'.
Issue: The RS fails because of 'already online on this server' and returns; the HM cannot receive the message and reports 'Regions in transition timed out'.
[jira] [Commented] (HBASE-4124) ZK restarted while assigning a region; the new active HM re-assigned it, but the RS warned 'already online on this server'.
[ https://issues.apache.org/jira/browse/HBASE-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13070342#comment-13070342 ]

fulin wang commented on HBASE-4124:
---
I can't find where getRegionsInTransitionInRS().add() is called, so I don't understand why this function was added. As for the 'already online on this server' error, I think the region should be closed or reassigned. I am trying to make a patch.
[jira] [Created] (HBASE-4124) ZK restarted while assigning a region, new active HM re-assign it but the RS warned 'already online on this server'.
Key: HBASE-4124
URL: https://issues.apache.org/jira/browse/HBASE-4124
Project: HBase
Issue Type: Bug
Components: master
Reporter: fulin wang
[jira] [Updated] (HBASE-4124) ZK restarted while assigning a region, new active HM re-assign it but the RS warned 'already online on this server'.
[ https://issues.apache.org/jira/browse/HBASE-4124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-4124:
--
Attachment: log.txt

The error log.
[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

fulin wang updated HBASE-4093:
--
Attachment: HBASE-4093-trunk_V3.patch
            HBASE-4093-0.90_V3.patch

Following the review, I made two patches. Please check them. Thanks.

Key: HBASE-4093
URL: https://issues.apache.org/jira/browse/HBASE-4093
Project: HBase
Issue Type: Bug
Components: master
Affects Versions: 0.90.3
Reporter: fulin wang
Assignee: fulin wang
Attachments: HBASE-4093-0.90.patch, HBASE-4093-0.90_V2.patch, HBASE-4093-0.90_V3.patch, HBASE-4093-trunk_V2.patch, HBASE-4093-trunk_V3.patch, surefire-report.html
Original Estimate: 8h
Remaining Estimate: 8h

When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed. The HMaster log then contains many 'Not running balancer because processing dead regionserver(s): []' messages.

HMaster log:
2011-07-09 01:38:31,820 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed path hdfs://162.2.16.6:9000/hbase/Htable_UFDR_035/fe7e51c0a74fac096cea8cdb3c9497a6/recovered.edits/00204525422 (wrote 8 edits in 61583ms)
2011-07-09 01:38:31,836 ERROR org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056
java.io.IOException: hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352, entryStart=1878997244, pos=1879048192, end=2003890606, edit=80274
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
  at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:244)
  at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:200)
  at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:172)
  at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:429)
  at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:262)
  at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
  at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:201)
  at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:114)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
  at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Could not obtain block: blk_1310107715558_225636 file=/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352
  at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2491)
  at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2256)
  at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2441)
  at java.io.DataInputStream.read(DataInputStream.java:132)
  at java.io.DataInputStream.readFully(DataInputStream.java:178)
  at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
  at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1984)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1884)
  at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
  at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:198)
  ... 10 more
2011-07-09 01:38:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): [162-2-6-187,20020,1310107719056]
2011-07-09 01:39:29,946 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read.
[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13066984#comment-13066984 ]

fulin wang commented on HBASE-4093:
---
I'm not sure about the execution time of the verifyAndAssignRoot method, so it cannot use a fixed timeout. 'hbase.catalog.verification.retries' is the number of retries.
[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067457#comment-13067457 ]

fulin wang commented on HBASE-4093:
---
This is a protection for when the system is in a faulty state. When 'this.data' in blockUntilAvailable is null, verifyAndAssignRoot() waits; when 'this.data' is not null, it does not wait. This issue happened in the verifyRegionLocation() method, and the exception was a SocketTimeoutException, so I think we should sleep one second and retry five times to try to handle this fault state.
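The sleep-one-second, retry-five-times behavior proposed in the comment above can be sketched as a generic retry helper. This is a stdlib-only illustration under stated assumptions; VerifyRetry and retryCall are hypothetical names, not code from the attached patches:

```java
import java.util.concurrent.Callable;

public class VerifyRetry {
    // Generic retry helper: attempt the call up to maxRetries times,
    // sleeping sleepMs between attempts, and rethrow the last failure
    // if every attempt throws (maxRetries is assumed to be positive).
    public static <T> T retryCall(Callable<T> call, int maxRetries, long sleepMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) {
                    Thread.sleep(sleepMs);
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice (simulating the SocketTimeoutException seen in the
        // log), then succeeds on the third attempt.
        String result = retryCall(() -> {
            if (++calls[0] < 3) {
                throw new java.net.SocketTimeoutException("simulated timeout");
            }
            return "verified";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

With this shape, the retry count can be read from a configuration key such as the 'hbase.catalog.verification.retries' setting discussed in the thread, instead of being hard-coded.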
[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067480#comment-13067480 ]

fulin wang commented on HBASE-4093:
---
I think 'hbase.catalog.verification.times' is not necessary; it is just the retry count. If 5 retries fail, we should restart the HMaster. Can we remove 'hbase.catalog.verification.times'? If you agree, I will make a patch. Thanks.
[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throw exception, The deadServers state can not be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067488#comment-13067488 ] fulin wang commented on HBASE-4093: --- Yes, I write error, it is 'hbase.catalog.verification.retries'. can you give it a reasonable value? I wonder that 5 times is enough. When verifyAndAssignRoot throw exception, The deadServers state can not be changed. --- Key: HBASE-4093 URL: https://issues.apache.org/jira/browse/HBASE-4093 Project: HBase Issue Type: Bug Components: master Affects Versions: 0.90.3 Reporter: fulin wang Assignee: fulin wang Attachments: HBASE-4093-0.90.patch, HBASE-4093-0.90_V2.patch, HBASE-4093-0.90_V3.patch, HBASE-4093-trunk_V2.patch, HBASE-4093-trunk_V3.patch, surefire-report.html Original Estimate: 8h Remaining Estimate: 8h When verifyAndAssignRoot throw exception, The deadServers state can not be changed. The Hmaster log has a lot of 'Not running balancer because processing dead regionserver(s): []' information. 
HMaster log:
2011-07-09 01:38:31,820 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Closed path hdfs://162.2.16.6:9000/hbase/Htable_UFDR_035/fe7e51c0a74fac096cea8cdb3c9497a6/recovered.edits/00204525422 (wrote 8 edits in 61583ms)
2011-07-09 01:38:31,836 ERROR org.apache.hadoop.hbase.master.MasterFileSystem: Failed splitting hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056
java.io.IOException: hdfs://162.2.16.6:9000/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352, entryStart=1878997244, pos=1879048192, end=2003890606, edit=80274
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.addFileInfoToException(SequenceFileLogReader.java:244)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:200)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:172)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.parseHLog(HLogSplitter.java:429)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:262)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:201)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:114)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:156)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Could not obtain block: blk_1310107715558_225636 file=/hbase/.logs/162-2-6-187,20020,1310107719056/162-2-6-187%3A20020.1310143885352
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:2491)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:2256)
	at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:2441)
	at java.io.DataInputStream.read(DataInputStream.java:132)
	at java.io.DataInputStream.readFully(DataInputStream.java:178)
	at org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
	at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1984)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1884)
	at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:1930)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.next(SequenceFileLogReader.java:198)
	... 10 more
2011-07-09 01:38:33,052 DEBUG org.apache.hadoop.hbase.master.HMaster: Not running balancer because processing dead regionserver(s): [162-2-6-187,20020,1310107719056]
2011-07-09 01:39:29,946 WARN org.apache.hadoop.hbase.master.CatalogJanitor: Failed scan of catalog table
java.net.SocketTimeoutException: Call to /162.2.6.187:20020 failed on socket timeout exception: java.net.SocketTimeoutException: 6 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/162.2.6.187:38721 remote=/162.2.6.187:20020]
	at org.apache.hadoop.hbase.ipc.HBaseClient.wrapException(HBaseClient.java:802)
	at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:775)
	at org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
	at $Proxy6.getRegionInfo(Unknown Source)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.verifyRegionLocation(CatalogTracker.java:424)
	at org.apache.hadoop.hbase.catalog.CatalogTracker.getMetaServerConnection(CatalogTracker.java:272)
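The fix discussed in the comments above bounds verifyAndAssignRoot with a configurable number of retries (the 'hbase.catalog.verification.retries' setting mentioned earlier), so a transient RPC failure does not leave the deadServers bookkeeping stuck forever. The following is a hypothetical, self-contained sketch of that bounded-retry pattern, not the committed patch (class and method names are illustrative):

```java
public class RetrySketch {
    interface Action { void run() throws Exception; }

    // Retry the action up to 'retries' times, pausing between attempts.
    // On success the caller can safely clear the dead-server state; if all
    // attempts fail, the last exception is rethrown so the failure is visible.
    static void runWithRetries(Action action, int retries, long pauseMs)
            throws Exception {
        Exception last = null;
        for (int i = 0; i < retries; i++) {
            try {
                action.run();
                return; // success
            } catch (Exception e) {
                last = e;
                Thread.sleep(pauseMs);
            }
        }
        throw last; // every attempt failed
    }

    public static void main(String[] args) throws Exception {
        final int[] attempts = {0};
        // Fails twice, then succeeds -- mimics a transient RPC failure.
        runWithRetries(() -> {
            attempts[0]++;
            if (attempts[0] < 3) throw new java.io.IOException("transient");
        }, 5, 10);
        System.out.println("succeeded after " + attempts[0] + " attempts");
    }
}
```

Whether 5 retries is enough depends on how long a dead region server's lease and log split typically take; making the count configurable, as the comment suggests, sidesteps picking one universal value.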
[jira] [Commented] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13067512#comment-13067512 ] fulin wang commented on HBASE-4093:
---
Thanks, Ted Yu.
[Issue description and HMaster log as quoted above.]
[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fulin wang updated HBASE-4093:
--
Attachment: surefire-report.html, HBASE-4093-trunk_V2.patch, HBASE-4093-0.90_V2.patch

I have made two patches, one for 0.90 and one for trunk. Unit testing has passed. Please check, thanks.
[Issue description and HMaster log as quoted above.]
[jira] [Updated] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
[ https://issues.apache.org/jira/browse/HBASE-4093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fulin wang updated HBASE-4093:
--
Attachment: HBASE-4093-0.90.patch

verifyAndAssignRoot failed many times after restarting the HMaster.
[Issue description and HMaster log as quoted above.]
[jira] [Created] (HBASE-4093) When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed.
---
Key: HBASE-4093
URL: https://issues.apache.org/jira/browse/HBASE-4093
Project: HBase
Issue Type: Bug
Components: master
Affects Versions: 0.90.3
Reporter: fulin wang

When verifyAndAssignRoot throws an exception, the deadServers state cannot be changed. The HMaster log contains many 'Not running balancer because processing dead regionserver(s): []' messages.
[HMaster log and stack traces as quoted above.]