[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642593#comment-14642593 ]

Ashish Singhi commented on HBASE-10933:
---------------------------------------

Was this issue really fixed? I do not see any code change committed for it.

> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-10933
>                 URL: https://issues.apache.org/jira/browse/HBASE-10933
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.94.16, 0.98.2
>            Reporter: Deepak Sharma
>            Assignee: Kashif
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, TestResults-trunk.txt
>
> If the .regioninfo file is missing from a region directory and we run hbck -repair or hbck -fixHdfsOrphans, the tool cannot repair the region and instead throws a NullPointerException:
>
> {code}
> 2014-04-08 20:11:49,750 INFO [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
>     at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
>     at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
>     at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
>     at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
>     at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
>     at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
>     at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
>     at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
>     at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
>     at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
>     at junit.framework.TestCase.runBare(TestCase.java:132)
>     at junit.framework.TestResult$1.protect(TestResult.java:110)
>     at junit.framework.TestResult.runProtected(TestResult.java:128)
>     at junit.framework.TestResult.run(TestResult.java:113)
>     at junit.framework.TestCase.run(TestCase.java:124)
>     at junit.framework.TestSuite.runTest(TestSuite.java:243)
>     at junit.framework.TestSuite.run(TestSuite.java:238)
>     at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>     at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>     at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>     at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
>
> The problem arises because in the HBaseFsck class,
> {code}
> private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> initializes its table info from the SortedMap tablesInfo:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but in private SortedMap loadHdfsRegionInfos():
> {code}
> for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
>     // was an orphan
>     continue;
>   }
> {code}
> orphan regions are skipped, so the orphan's table is never added to tablesInfo. The later lookup therefore returns null, and dereferencing it produces the NullPointerException.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
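The failure mode described above can be reproduced in miniature: a lookup in a map that deliberately skips orphan entries returns null, and dereferencing the result is exactly the reported NPE. The following is a hypothetical stand-alone sketch (TableInfo and the map contents are stand-ins, not the actual HBaseFsck types):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupSketch {
    // Stand-in for HBaseFsck.TableInfo; hypothetical, for illustration only.
    static class TableInfo {
        String describe() { return "table descriptor"; }
    }

    public static void main(String[] args) {
        SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();
        // loadHdfsRegionInfos() skips orphan regions ("continue"), so the
        // orphan's table never receives an entry in tablesInfo.
        tablesInfo.put("healthyTable", new TableInfo());

        // adoptHdfsOrphan() then looks up the orphan's table name:
        TableInfo tableInfo = tablesInfo.get("TestHdfsOrphans1"); // returns null
        try {
            // ... and dereferences the result without a null check:
            System.out.println(tableInfo.describe());
        } catch (NullPointerException e) {
            System.out.println("NullPointerException, as in the reported stack trace");
        }
    }
}
```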
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14324467#comment-14324467 ]

Andrew Purtell commented on HBASE-10933:
----------------------------------------

Why was this reopened, [~deepakhuawei]?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14050817#comment-14050817 ]

Jonathan Hsieh commented on HBASE-10933:
----------------------------------------

Moving that method out of Bytes and making it private is sufficient for me. A unit test would be great. Thanks!

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14044401#comment-14044401 ]

Kashif J S commented on HBASE-10933:
------------------------------------

Thanks [~jmhsieh] :) Alright, I will modify the patch as per your comments/concerns. Maybe I will rename the nextKey method to something like "nextByte", which may be more meaningful for the value it returns, and I will hide the other methods by making them private.

bq. What do you mean by exclusive? -- in the [a, b), [b, c) sense where b is in the second but not the first? If we did next we'd still be able to sneak 'abc\x0' between 'abc' and 'abd'

Yes: if the region start key and end key are 'abc' and 'abd', and we did next, we would still be able to sneak in 'abc\x0'. So nextKey may be the wrong method name; I will rename the method to nextByte. I will also add unit test cases for the same.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
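For reference, the behavior under discussion (increment the trailing byte with carry, rather than append a zero byte) can be sketched as follows. This mirrors the carry-increment logic in the patch under a made-up class name; "increment" here is a stand-in for the proposed nextByte name:

```java
public class NextByteSketch {
    // Carry-propagating increment of the last byte within [offset, offset+length):
    // {'a','b','c'} -> {'a','b','d'}; an all-0xFF key cannot be incremented.
    static boolean increment(byte[] key, int offset, int length) {
        if (length == 0) {
            return false;
        }
        int i = offset + length - 1;
        while (key[i] == -1) {        // (byte) 0xFF == -1 in Java
            key[i] = 0;
            i--;
            if (i < offset) {
                return false;       // carried past the start: no next key
            }
        }
        key[i] = (byte) (key[i] + 1);
        return true;
    }

    public static void main(String[] args) {
        byte[] abc = {'a', 'b', 'c'};
        increment(abc, 0, abc.length);
        System.out.println(new String(abc));                 // "abd"

        byte[] max = {(byte) 0xFF, (byte) 0xFF};
        System.out.println(increment(max, 0, max.length));   // false
    }
}
```

Note that this is a byte increment, not a key successor: it maps "abc" to "abd", skipping every key of the form "abc…" in between, which is exactly why the rename is being discussed.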
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042283#comment-14042283 ]

Jonathan Hsieh commented on HBASE-10933:
----------------------------------------

What do you mean by exclusive? In the [a, b), [b, c) sense, where b is in the second but not the first? If we did next, we'd still be able to sneak 'abc\x0' between 'abc' and 'abd'.

Taking a step back, my main concern is that this nextKey is a public method in Bytes -- a common util class used internally and by users. If a call lives there, the API and its semantics must be clear and correct in all cases. If you made this a private method in HBaseFsck (thus scoped only to that tool), I wouldn't be concerned.

bq. I just thought this can be a generic code. You never know it could be required later in the development. If still not required, I will remove it now ?

Please remove it and keep just the specific version. Though "you never know", it tends to be "you never need", and it makes the code harder to read (or requires the reader to read more code). Because this is in the public API, I'm more concerned.
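The "sneak between" point can be checked directly: under unsigned lexicographic comparison, 'abc\x0' sorts strictly between 'abc' and 'abd', so a region whose end key was produced by the byte increment still excludes rows a true successor would admit. A small self-contained sketch, using the JDK's Arrays.compareUnsigned (Java 9+) in place of HBase's Bytes.compareTo:

```java
import java.util.Arrays;

public class KeyOrderSketch {
    public static void main(String[] args) {
        byte[] abc  = {'a', 'b', 'c'};
        byte[] abc0 = {'a', 'b', 'c', 0};   // "abc\x0"
        byte[] abd  = {'a', 'b', 'd'};

        // 'abc' < 'abc\x0' < 'abd' under unsigned lexicographic order, so
        // 'abc\x0' sneaks between the key and its byte-incremented "next" key.
        System.out.println(Arrays.compareUnsigned(abc, abc0) < 0);  // true
        System.out.println(Arrays.compareUnsigned(abc0, abd) < 0);  // true
    }
}
```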
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14035109#comment-14035109 ]

Kashif J S commented on HBASE-10933:
------------------------------------

Thanks for the review [~lhofhansl] and [~jmhsieh].

bq. This specific method isn't correct. if we have a key "abc", the next key is actually "abc\x0", not "abd". Rename the method or fix the code. Please add unit test to demonstrate expected behavior.

In HBase the end key is always exclusive, hence the method returns the next exclusive key. If you suggest, maybe we can rename it to nextExclusiveKey or nextEndKey; nextKey seemed fine to me. Let me know your call.

bq. This is added and only used once in the code. Why make all the generic code when you only need the more specific code? Please remove.

I just thought this could be generic code -- you never know, it could be required later in development. If it is still not required, shall I remove it now?

--
This message was sent by Atlassian JIRA
(v6.2#6252)
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14032661#comment-14032661 ]

Jonathan Hsieh commented on HBASE-10933:
----------------------------------------

[~kashi...@huawei.com], here's a quick review. Nice tests. I have a few style suggestions/fixes.

This specific method isn't correct: if we have a key "abc", the next key is actually "abc\x0", not "abd". Rename the method or fix the code, and please add a unit test to demonstrate the expected behavior.

{code}
+  public static boolean nextKey(byte[] key, int offset, int length) {
+    if (length == 0) {
+      return false;
+    }
+    int i = offset + length - 1;
+    while (key[i] == -1) {
+      key[i] = 0;
+      i--;
+      if (i < offset) {
+        return false;
+      }
+    }
+    key[i] = (byte) (key[i] + 1);
+    return true;
+  }
{code}

This is added and only used once in the code. Why make all the generic code when you only need the more specific code? Please remove.

{code}
+  /**
+   * Increment the key to the next key
+   * @param key the key to increment
+   * @return a new byte array with the next key or null if the key could not be incremented
+   *         because it's already at its max value.
+   */
+  public static byte[] nextKey(byte[] key) {
+    byte[] nextStartRow = new byte[key.length];
+    System.arraycopy(key, 0, nextStartRow, 0, key.length);
+    if (!nextKey(nextStartRow, nextStartRow.length)) {
+      return null;
+    }
+    return nextStartRow;
+  }
+
+  /**
+   * Increment the key in-place to the next key
+   * @param key the key to increment
+   * @param length the length of the key
+   * @return true if the key can be incremented and false otherwise if the key is at its max value.
+   */
+  public static boolean nextKey(byte[] key, int length) {
+    return nextKey(key, 0, length);
+  }
{code}

Nice catch!

{code}
--- src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java (revision 1596187)
+++ src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java (working copy)
@@ -597,14 +597,14 @@
     FileSystem fs = p.getFileSystem(getConf());
     FileStatus[] dirs = fs.listStatus(p);
     if (dirs == null) {
-      LOG.warn("Attempt to adopt ophan hdfs region skipped becuase no files present in " +
+      LOG.warn("Attempt to adopt orphan hdfs region skipped because no files present in " +
           p + ". This dir could probably be deleted.");
       return ;
     }
     String tableName = Bytes.toString(hi.getTableName());
     TableInfo tableInfo = tablesInfo.get(tableName);
-    Preconditions.checkNotNull("Table " + tableName + "' not present!", tableInfo);
+    Preconditions.checkNotNull(tableInfo, "Table " + tableName + "' not present!");
     HTableDescriptor template = tableInfo.getHTD();
{code}

Style issue: just pass in the list of split keys instead of having the boolean withSplits arg. Then a null or empty splits list is the base case with one region (withSplits=false). Remove the boolean withSplits and add a null/empty check on the keys instead.

{code}
    * @throws InterruptedException
    * @throws KeeperException
    */
-  HTable setupTable(String tablename) throws Exception {
+  HTable setupTable(String tablename, boolean withSplits, byte[][] rowkeys) throws Exception {
     HTableDescriptor desc = new HTableDescriptor(tablename);
     HColumnDescriptor hcd = new HColumnDescriptor(Bytes.toString(FAM));
     desc.addFamily(hcd); // If a table has no CF's it doesn't get checked
-    TEST_UTIL.getHBaseAdmin().createTable(desc, SPLITS);
+    if (withSplits) {
+      TEST_UTIL.getHBaseAdmin().createTable(desc, SPLITS);
+    } else {
+      TEST_UTIL.getHBaseAdmin().createTable(desc);
+    }
     tbl = new HTable(TEST_UTIL.getConfiguration(), tablename);
     List puts = new ArrayList();
-    for (byte[] row : ROWKEYS) {
+    for (byte[] row : rowkeys) {
       Put p = new Put(row);
       p.add(FAM, Bytes.toBytes("val"), row);
       puts.add(p);
{code}
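The Preconditions fix in the diff above deserves a note: Guava's checkNotNull(reference, errorMessage) takes the value under test first, so the original call validated the message string (never null) and let a null tableInfo slip through to the NPE later. A self-contained sketch of the same pitfall, with a plain helper standing in for Guava so it runs without dependencies:

```java
public class CheckNotNullSketch {
    // Same contract as Guava's Preconditions.checkNotNull(reference, message):
    // the value under test comes FIRST, the error message second.
    static <T> T checkNotNull(T reference, Object errorMessage) {
        if (reference == null) {
            throw new NullPointerException(String.valueOf(errorMessage));
        }
        return reference;
    }

    public static void main(String[] args) {
        Object tableInfo = null;

        // Buggy form (arguments swapped): this checks the message string,
        // which is never null, so no exception fires here ...
        checkNotNull("Table 't1' not present!", tableInfo);
        System.out.println("swapped-arg check passed silently");

        // Corrected form: fails fast with a useful message instead of a
        // bare NPE at the later tableInfo.getHTD() dereference.
        try {
            checkNotNull(tableInfo, "Table 't1' not present!");
        } catch (NullPointerException e) {
            System.out.println("fail-fast: " + e.getMessage());
        }
    }
}
```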
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14031474#comment-14031474 ] Lars Hofhansl commented on HBASE-10933: --- Looks good upon coarse inspection. [~jmhsieh], mind having a look?
> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -
> Key: HBASE-10933
> URL: https://issues.apache.org/jira/browse/HBASE-10933
> Project: HBase
> Issue Type: Bug
> Components: hbck
> Affects Versions: 0.94.16, 0.98.2
> Reporter: Deepak Sharma
> Assignee: Kashif J S
> Priority: Critical
> Fix For: 0.99.0, 0.94.21
> Attachments: HBASE-10933-0.94-v1.patch, HBASE-10933-0.94-v2.patch, HBASE-10933-trunk-v1.patch, HBASE-10933-trunk-v2.patch, TestResults-0.94.txt, TestResults-trunk.txt
>
> If the .regioninfo file is missing from an HBase region directory, running hbck repair or hbck -fixHdfsOrphans does not resolve the problem; instead it throws a NullPointerException:
> {code}
> 2014-04-08 20:11:49,750 INFO [main] util.HBaseFsck (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs dir: hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
>   at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
>   at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
>   at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
>   at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
>   at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
>   at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
>   at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
>   at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
>   at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
>   at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
>   at junit.framework.TestCase.runBare(TestCase.java:132)
>   at junit.framework.TestResult$1.protect(TestResult.java:110)
>   at junit.framework.TestResult.runProtected(TestResult.java:128)
>   at junit.framework.TestResult.run(TestResult.java:113)
>   at junit.framework.TestCase.run(TestCase.java:124)
>   at junit.framework.TestSuite.runTest(TestSuite.java:243)
>   at junit.framework.TestSuite.run(TestSuite.java:238)
>   at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>   at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
>   at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
>   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
>   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
>   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
>   at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The problem occurs because in the HBaseFsck class,
> {code}
> private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> initializes tableInfo from the SortedMap tablesInfo:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but in private SortedMap loadHdfsRegionInfos()
> {code}
> for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
>     // was an orphan
>     continue;
>   }
> {code}
> there is a check that skips orphan regions, so a table whose only regions are orphans is never added to the SortedMap tablesInfo, and the later lookup returns null, causing the NullPointerException.
-- This message was sent by Atlassian JIRA (v6.2#6252)
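[Editorial sketch] The failure mode described in the report — tablesInfo.get(tableName) returning null because loadHdfsRegionInfos() skips orphan regions — can be illustrated with a minimal guard. This is a hypothetical illustration, not the committed patch; AdoptOrphanSketch, the simplified TableInfo class, and getOrCreateTableInfo are stand-in names for the real HBaseFsck internals:

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch, NOT the committed fix: one way to avoid the NPE in
// HBaseFsck.adoptHdfsOrphan(). "TableInfo" is a stand-in for the real
// HBaseFsck.TableInfo inner class; the map mirrors the tablesInfo field.
public class AdoptOrphanSketch {

    static class TableInfo {
        final String tableName;
        TableInfo(String tableName) { this.tableName = tableName; }
    }

    // loadHdfsRegionInfos() skips orphan regions, so a table whose only
    // regions are orphans never makes it into tablesInfo. Instead of
    // dereferencing the null returned by get(), register a fresh TableInfo
    // so orphan adoption can proceed.
    static TableInfo getOrCreateTableInfo(SortedMap<String, TableInfo> tablesInfo,
                                          String tableName) {
        TableInfo tableInfo = tablesInfo.get(tableName);
        if (tableInfo == null) {
            tableInfo = new TableInfo(tableName);
            tablesInfo.put(tableName, tableInfo);
        }
        return tableInfo;
    }

    public static void main(String[] args) {
        SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();
        // Simulates adopting an orphan from a table hbck has not registered.
        TableInfo ti = getOrCreateTableInfo(tablesInfo, "TestHdfsOrphans1");
        System.out.println(ti.tableName + " registered; tables known: " + tablesInfo.size());
    }
}
```

The point of the sketch is only that the lookup must tolerate a table that was never registered; whether the real fix creates the entry, defers it, or reports the orphan differently is up to the patch under review.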
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14030685#comment-14030685 ] Hadoop QA commented on HBASE-10933: ---
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12650294/HBASE-10933-trunk-v2.patch
against trunk revision .
ATTACHMENT ID: 12650294
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 3 new or modified tests.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any warning messages.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines longer than 100.
{color:green}+1 site{color}. The mvn site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9767//console
This message is automatically generated.
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018475#comment-14018475 ] Hadoop QA commented on HBASE-10933: ---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12646009/HBASE-10933-0.94-v2.patch
against trunk revision .
ATTACHMENT ID: 12646009
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 9 new or modified tests.
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9697//console
This message is automatically generated.
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14018472#comment-14018472 ] Kashif J S commented on HBASE-10933: --- Kindly review the patch.
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003846#comment-14003846 ] Hadoop QA commented on HBASE-10933: ---
{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12645802/TestResults-trunk.txt
against trunk revision .
ATTACHMENT ID: 12645802
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified tests.
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HBASE-Build/9554//console
This message is automatically generated.
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003365#comment-14003365 ] Kashif J S commented on HBASE-10933: --- For the trunk version the patch fixes:
1> When a table contains 1 region and only 1 KV, if the regioninfo is missing and hbck repair is run, an invalid regioninfo with the same start and end key is created.
2> JUnit TCs have been added for various scenarios.
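[Editorial sketch] The "same start and end key" problem mentioned above can be made concrete: in HBase a region's end key is exclusive, and the last (or only) region of a table uses an empty end key, so a rebuilt regioninfo with identical non-empty start and end keys describes a region that can hold no rows. This is a hypothetical illustration under those assumptions; RegionKeyRangeCheck and isValidRange are made-up names, not code from the patch:

```java
import java.util.Arrays;

// Hypothetical illustration (not code from the patch): why a rebuilt
// .regioninfo with identical non-empty start and end keys is invalid.
public class RegionKeyRangeCheck {
    static final byte[] EMPTY = new byte[0];

    // Returns true when [start, end) is a usable region key range:
    // an empty end key means the range is open-ended (last/only region);
    // otherwise the exclusive end key must sort strictly after the start.
    static boolean isValidRange(byte[] start, byte[] end) {
        if (end.length == 0) {
            return true;
        }
        return Arrays.compare(start, end) < 0;
    }

    public static void main(String[] args) {
        byte[] k = "rowkey1".getBytes();
        // A single-region table (empty start and end) is valid; a region
        // whose start equals its non-empty end covers no rows.
        System.out.println(isValidRange(EMPTY, EMPTY)); // true
        System.out.println(isValidRange(k, k));         // false
    }
}
```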
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003362#comment-14003362 ] Kashif J S commented on HBASE-10933: --- For the trunk version the patch fixes:
1> When a table has 1 region and zero KVs, if the regioninfo is deleted, running hbck will corrupt the table and a TableNotFoundException will be thrown for any subsequent operation on the table.
2> JUnit TCs have been added for various scenarios.
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14003359#comment-14003359 ] Kashif J S commented on HBASE-10933: --- For the 0.94.* versions the patch fixes four issues:
1> The NullPointerException reported above.
2> When a table has 1 region and zero KVs, if the regioninfo is deleted, running hbck will corrupt the table and a TableNotFoundException will be thrown for any subsequent operation on the table.
3> When a table contains 1 region and only 1 KV, if the regioninfo is missing and hbck repair is run, an invalid regioninfo with the same start and end key is created.
4> A table with multiple regions where all the region dirs are missing (see the testNoHdfsTable modification).
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989272#comment-13989272 ]

Kashif J S commented on HBASE-10933:
------------------------------------

I have the patch ready for the 0.94.19 version. This NullPointerException probably does not exist in the 0.98.* versions or in trunk. Deepak Sharma: can you please confirm for 0.98.2? Also, there is a problem with HBaseFsck when a table's regions contain no data, or the data is still only in memory: HBaseFsck then fails to resolve the INCONSISTENCY. I am working on a fix for that and will update the patch for review soon.

> hbck -fixHdfsOrphans is not working properly it throws null pointer exception
> -----------------------------------------------------------------------------
>
>                 Key: HBASE-10933
>                 URL: https://issues.apache.org/jira/browse/HBASE-10933
>             Project: HBase
>          Issue Type: Bug
>          Components: hbck
>    Affects Versions: 0.94.16, 0.98.2
>            Reporter: Deepak Sharma
>            Assignee: Y. SREENIVASULU REDDY
>            Priority: Critical
>
> If the .regioninfo file is missing from an HBase region, then running hbck
> -repair or hbck -fixHdfsOrphans cannot resolve the problem; it throws a
> NullPointerException:
> {code}
> 2014-04-08 20:11:49,750 INFO  [main] util.HBaseFsck
> (HBaseFsck.java:adoptHdfsOrphans(470)) - Attempting to handle orphan hdfs
> dir:
> hdfs://10.18.40.28:54310/hbase/TestHdfsOrphans1/5a3de9ca65e587cb05c9384a3981c950
> java.lang.NullPointerException
> 	at org.apache.hadoop.hbase.util.HBaseFsck$TableInfo.access$000(HBaseFsck.java:1939)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphan(HBaseFsck.java:497)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.adoptHdfsOrphans(HBaseFsck.java:471)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.restoreHdfsIntegrity(HBaseFsck.java:591)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.offlineHdfsIntegrityRepair(HBaseFsck.java:369)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:447)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3769)
> 	at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3587)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.repairToFixHdfsOrphans(HbaseHbckRepair.java:244)
> 	at com.huawei.isap.test.smartump.hadoop.hbase.HbaseHbckRepair.setUp(HbaseHbckRepair.java:84)
> 	at junit.framework.TestCase.runBare(TestCase.java:132)
> 	at junit.framework.TestResult$1.protect(TestResult.java:110)
> 	at junit.framework.TestResult.runProtected(TestResult.java:128)
> 	at junit.framework.TestResult.run(TestResult.java:113)
> 	at junit.framework.TestCase.run(TestCase.java:124)
> 	at junit.framework.TestSuite.runTest(TestSuite.java:243)
> 	at junit.framework.TestSuite.run(TestSuite.java:238)
> 	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> 	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
> 	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
> 	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
> {code}
> The problem occurs because, in the HBaseFsck class,
> {code}
> private void adoptHdfsOrphan(HbckInfo hi)
> {code}
> initializes tableInfo from the SortedMap tablesInfo:
> {code}
> TableInfo tableInfo = tablesInfo.get(tableName);
> {code}
> but in private SortedMap loadHdfsRegionInfos():
> {code}
> for (HbckInfo hbi: hbckInfos) {
>   if (hbi.getHdfsHRI() == null) {
>     // was an orphan
>     continue;
>   }
> {code}
> orphan regions are skipped, so a table whose only region is an orphan is
> never added to tablesInfo, and the later lookup returns null, which causes
> the NullPointerException.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
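The failure mode described in the comment above can be reproduced with a plain SortedMap, independent of HBase: loadHdfsRegionInfos() skips orphan regions, so a table whose only region is an orphan never gets a tablesInfo entry, and the later unguarded get() returns null. A minimal self-contained sketch (the class and field names below are illustrative stand-ins, not the actual HBaseFsck API):

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class OrphanLookupSketch {
    // Illustrative stand-in for HBaseFsck.TableInfo.
    static class TableInfo {
        final String name;
        TableInfo(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        SortedMap<String, TableInfo> tablesInfo = new TreeMap<>();

        // Mirroring loadHdfsRegionInfos(): only non-orphan regions
        // populate the map. A table whose single region is an orphan
        // is therefore never added.
        boolean regionIsOrphan = true;
        String tableName = "TestHdfsOrphans1";
        if (!regionIsOrphan) {
            tablesInfo.put(tableName, new TableInfo(tableName));
        }

        // Unguarded lookup, as in adoptHdfsOrphan(): returns null here,
        // so any dereference of tableInfo would throw the reported NPE.
        TableInfo tableInfo = tablesInfo.get(tableName);
        System.out.println("lookup result: " + tableInfo);

        // One possible defensive fix (a sketch, not the committed patch):
        // create the missing entry on demand before using it.
        TableInfo guarded = tablesInfo.computeIfAbsent(tableName, TableInfo::new);
        System.out.println("after guard, entry exists: " + (guarded != null));
    }
}
```

The committed fix for this issue is not shown in the thread; the computeIfAbsent guard above only demonstrates the shape of a null-safe lookup, not the actual patch attached to the JIRA.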
[jira] [Commented] (HBASE-10933) hbck -fixHdfsOrphans is not working properly it throws null pointer exception
[ https://issues.apache.org/jira/browse/HBASE-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13964003#comment-13964003 ]

Deepak Sharma commented on HBASE-10933:
---------------------------------------

We hit this issue only when the table has exactly one region. If the table has more than one region, the tablesInfo SortedMap still gets a (tableName, modTInfo) entry for the table that has the HDFS orphan dir, because the following loop also iterates over the other, normal regions:

{code}
for (HbckInfo hbi: hbckInfos) {
  if (hbi.getHdfsHRI() == null) {
    // was an orphan
    continue;
  }
  // get table name from hdfs, populate various HBaseFsck tables.
  String tableName = Bytes.toString(hbi.getTableName());
  if (tableName == null) {
    // There was an entry in META not in the HDFS?
    LOG.warn("tableName was null for: " + hbi);
    continue;
  }

  TableInfo modTInfo = tablesInfo.get(tableName);
  if (modTInfo == null) {
    // only executed once per table.
    modTInfo = new TableInfo(tableName);
    Path hbaseRoot = FSUtils.getRootDir(getConf());
    tablesInfo.put(tableName, modTInfo);
    try {
      HTableDescriptor htd =
          FSTableDescriptors.getTableDescriptor(hbaseRoot.getFileSystem(getConf()),
              hbaseRoot, tableName);
      modTInfo.htds.add(htd);
    } catch (IOException ioe) {
      if (!orphanTableDirs.containsKey(tableName)) {
        LOG.warn("Unable to read .tableinfo from " + hbaseRoot, ioe);
        // should only report once for each table
        errors.reportError(ERROR_CODE.NO_TABLEINFO_FILE,
            "Unable to read .tableinfo from " + hbaseRoot + "/" + tableName);
        Set<String> columns = new HashSet<String>();
        orphanTableDirs.put(tableName, getColumnFamilyList(columns, hbi));
      }
    }
  }
  if (!hbi.isSkipChecks()) {
    modTInfo.addRegionInfo(hbi);
  }
}
{code}