[
https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14035109#comment-14035109
]
Kashif J S commented on HBASE-10933:
------------------------------------
Thanks for the review [~lhofhansl] & [~jmhsieh] .
bq. This specific method isn't correct. If we have a key "abc", the next key is
actually "abc\x0", not "abd". Rename the method or fix the code. Please add a
unit test to demonstrate the expected behavior.
// In HBase the end key is always exclusive, hence nextKey returns the next
exclusive key. If you prefer, we can rename it to nextExclusiveKey or
nextEndKey; nextKey seemed fine to me. Let me know your call.
bq. This is added and only used once in the code. Why make all the generic code
when you only need the more specific code? Please remove.
// I thought this could be generic code, since it might be required later in
development. If it is still not required, shall I remove it now?
> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
> Key: HBASE-10933
> URL: https://issues.apache.org/jira/browse/HBASE-10933
> Project: HBase
> Issue Type: Bug
> Components: hbck
> Affects Versions: 0.94.16, 0.98.2
> Reporter: Deepak Sharma
> Assignee: Kashif J S
> Priority: Critical
> Fix For: 0.99.0, 0.94.21
>
> Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch,
> HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt,
> TestResults-trunk.txt
>
>
> If the .regioninfo file does not exist for an HBase region, then running
> hbck -repair or hbck -fixHdfsOrphans does not resolve the problem; instead
> it throws a NullPointerException:
> {code}
> 2014-04-08 20:11:49,750 INFO [main] util.HBaseFsck
> (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs
> dir:
> hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
> at
> org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
> at
> org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
> at
> org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
> at
> org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
> at
> org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
> at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
> at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
> at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
> at
> com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
> at
> com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
> at junit.framework.TestCase.runBare(TestCase.java:132)
> at junit.framework.TestResult$1.protect(TestResult.java:110)
> at junit.framework.TestResult.runProtected(TestResult.java:128)
> at junit.framework.TestResult.run(TestResult.java:113)
> at junit.framework.TestCase.run(TestCase.java:124)
> at junit.framework.TestSuite.runTest(TestSuite.java:243)
> at junit.framework.TestSuite.run(TestSuite.java:238)
> at
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at
> org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> at
> org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> at
> org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The problem occurs because, in the HBaseFsck method
> {code}
> private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> we initialize tableInfo from the SortedMap<String, TableInfo> tablesInfo
> object:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but in private SortedMap<String, TableInfo> loadHdfsRegionInfos()
> {code}
> for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
>     // was an orphan
>     continue;
>   }
> {code}
> there is a check that skips orphan regions, so an orphan's table is never
> added to SortedMap<String, TableInfo> tablesInfo. Later, when
> adoptHdfsOrphan() looks up that table, tablesInfo.get(tableName) returns
> null, and dereferencing it causes the NullPointerException.
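The null dereference described above can be sketched with a defensive lookup: if the table was skipped because its only region was an orphan, create the entry on demand instead of dereferencing null. This is a minimal illustration under my own simplified TableInfo stand-in, not the actual fix in the attached patches.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanGuardDemo {
    // Simplified stand-in for HBaseFsck's TableInfo (hypothetical,
    // for illustration only).
    static class TableInfo {
        final String name;
        TableInfo(String name) { this.name = name; }
    }

    static final SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();

    // Guarded lookup: if loadHdfsRegionInfos() skipped the orphan's table,
    // create an entry instead of returning null to the caller.
    static TableInfo getOrCreate(String tableName) {
        TableInfo ti = tablesInfo.get(tableName);
        if (ti == null) {
            ti = new TableInfo(tableName);
            tablesInfo.put(tableName, ti);
        }
        return ti;
    }

    public static void main(String[] args) {
        // Simulate adopting an orphan whose table was never registered.
        TableInfo ti = getOrCreate("TestHdfsOrphans1");
        System.out.println(ti.name);
        System.out.println(tablesInfo.size());
    }
}
```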
--
This message was sent by Atlassian JIRA
(v6.2#6252)