[
https://issues.apache.org/jira/browse/HBASE-1211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12677734#action_12677734
]
sishen commented on HBASE-1211:
-------------------------------
I think the problem may be related to the different directory structure of the
stores, but the migrate check task tells me that no migration is needed.
New directory structure:
{code}
$ls -R /tmp/hbase-sishen/hbase/-ROOT-/
70236052  compaction.dir

/tmp/hbase-sishen/hbase/-ROOT-//70236052:
info

/tmp/hbase-sishen/hbase/-ROOT-//70236052/info:
3661099931949804692  576291393335904060  850021970530362024

/tmp/hbase-sishen/hbase/-ROOT-//compaction.dir:
{code}
Old directory structure:
{code}
$ls -R
70236052  compaction.dir

./70236052:
info

./70236052/info:
info  mapfiles

./70236052/info/info:
437028651759549661  5612126941317436942

./70236052/info/mapfiles:
437028651759549661  5612126941317436942

./70236052/info/mapfiles/437028651759549661:
data  index

./70236052/info/mapfiles/5612126941317436942:
data  index

./compaction.dir:
{code}
While trying to locate the .META. table info, it iterates over the store files.
But with the old structure, the entries under the store directory (info and
mapfiles) are themselves directories, so they are skipped:
{code}
  private Map<Long, StoreFile> loadStoreFiles() throws IOException {
    Map<Long, StoreFile> results = new HashMap<Long, StoreFile>();
    // List the contents of the store (column family) directory.
    FileStatus files[] = this.fs.listStatus(this.homedir);
    for (int i = 0; files != null && i < files.length; i++) {
      // Skip directories.
      if (files[i].isDir()) {
        continue;
      }
      // ...
{code}
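To make the effect concrete, here is a minimal standalone sketch of the same
listStatus()/isDir() pattern run against a local filesystem. This is not HBase
code: the OldLayoutSkipDemo class, the /tmp/old-layout-demo path, and the
loaded counter are made up for illustration; only the listing-and-skip loop
mirrors what loadStoreFiles() does.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OldLayoutSkipDemo {
  public static void main(String[] args) throws IOException {
    FileSystem fs = FileSystem.getLocal(new Configuration());

    // Recreate an old-style family directory: it contains only the
    // 'info' and 'mapfiles' subdirectories, no flat store files.
    Path family = new Path("/tmp/old-layout-demo/70236052/info");
    fs.mkdirs(new Path(family, "info"));
    fs.mkdirs(new Path(family, "mapfiles/437028651759549661"));

    int loaded = 0;
    FileStatus[] files = fs.listStatus(family);
    for (int i = 0; files != null && i < files.length; i++) {
      if (files[i].isDir()) {
        continue;   // every entry is a directory, so every entry is skipped
      }
      loaded++;     // never reached for the old layout
    }
    // Prints "store files loaded: 0" -- the region looks empty, the
    // .META. info is never found, and the client exhausts its retries.
    System.out.println("store files loaded: " + loaded);

    fs.delete(new Path("/tmp/old-layout-demo"), true);
  }
}
{code}

With the new layout the same loop picks up the flat store files directly.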
> NPE in retries exhausted exception
> ----------------------------------
>
> Key: HBASE-1211
> URL: https://issues.apache.org/jira/browse/HBASE-1211
> Project: Hadoop HBase
> Issue Type: Bug
> Reporter: stack
> Fix For: 0.19.1, 0.20.0
>
>
> {code}
> [st...@aa0-000-12 hadoop-0.19.1]$ ./bin/hadoop org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 8
> java.lang.NullPointerException
>   at org.apache.hadoop.hbase.util.Bytes.toString(Bytes.java:147)
>   at org.apache.hadoop.hbase.client.RetriesExhaustedException.getMessage(RetriesExhaustedException.java:50)
>   at org.apache.hadoop.hbase.client.RetriesExhaustedException.<init>(RetriesExhaustedException.java:40)
>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.getRegionServerWithRetries(HConnectionManager.java:875)
>   at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:55)
>   at org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:29)
>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.listTables(HConnectionManager.java:317)
>   at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.tableExists(HConnectionManager.java:270)
>   at org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:106)
>   at org.apache.hadoop.hbase.PerformanceEvaluation.checkTable(PerformanceEvaluation.java:201)
>   at org.apache.hadoop.hbase.PerformanceEvaluation.runNIsMoreThanOne(PerformanceEvaluation.java:217)
>   at org.apache.hadoop.hbase.PerformanceEvaluation.runTest(PerformanceEvaluation.java:639)
>   at org.apache.hadoop.hbase.PerformanceEvaluation.doCommandLine(PerformanceEvaluation.java:748)
>   at org.apache.hadoop.hbase.PerformanceEvaluation.main(PerformanceEvaluation.java:768)
> {code}
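On the NPE itself (separate from why the retries get exhausted): the trace
above shows Bytes.toString() failing at Bytes.java:147 while
RetriesExhaustedException builds its message, which points at a null byte[]
being passed in. A null-guard of the following shape would keep the message
building from throwing; this is only an illustrative sketch, assuming a
UTF-8 conversion inside Bytes.toString() (SafeBytes and toStringOrNullMarker
are made-up names, not HBase API and not the committed fix).

{code}
// Illustrative sketch only -- not HBase code.
public final class SafeBytes {
  private SafeBytes() {
  }

  /** UTF-8 byte[]-to-String conversion that tolerates null input. */
  public static String toStringOrNullMarker(final byte[] b) {
    if (b == null) {
      // Returning a marker keeps exception-message building from itself
      // throwing an NPE and masking the real "retries exhausted" cause.
      return "<null>";
    }
    try {
      return new String(b, "UTF-8");
    } catch (java.io.UnsupportedEncodingException e) {
      // UTF-8 support is mandated by the Java platform, so this cannot happen.
      throw new RuntimeException(e);
    }
  }
}
{code}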