[
https://issues.apache.org/jira/browse/HBASE-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
stack updated HBASE-12782:
--------------------------
Attachment: 12782.search.txt
Here is my little search tool. It searches the WALs first and then the oldWALs, so
it runs as two jobs. You run it like you run generate or verify, as an option to ITBLL.
All it takes as input is the verify stage output dir (it reads the verify output
to find the missing keys, which verify now saves into a sequencefile as a
by-product).
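Roughly, you would invoke it the same way as the other ITBLL steps below; the step
name "Search" and taking the verify output dir as the lone argument are my
assumptions about the attached tool, not confirmed usage:
HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase
classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop
org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList Search <verify output dir>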
I ran it after a verify reported 252 UNREFERENCED rows, following a generate that
uploaded 125M rows with CM doing its best.
The tool found all 252 rows in the WALs.
Let me try searching the hfiles next.
> ITBLL fails for me if generator does anything but 5M per maptask
> ----------------------------------------------------------------
>
> Key: HBASE-12782
> URL: https://issues.apache.org/jira/browse/HBASE-12782
> Project: HBase
> Issue Type: Bug
> Components: integration tests
> Affects Versions: 1.0.0
> Reporter: stack
> Priority: Critical
> Fix For: 1.0.0
>
> Attachments: 12782.search.txt, 12782.unit.test.and.it.test.txt,
> 12782.unit.test.writing.txt
>
>
> Anyone else seeing this? If I do an ITBLL run with the generator doing 5M rows per
> maptask, all is good -- verify passes. I've been running 5 servers with
> one slot per server. So the below works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey
> serverKilling Generator 5 5000000 g1.tmp
> or if I double the map tasks, it works:
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey
> serverKilling Generator 10 5000000 g2.tmp
> ...but if I change the 5M to 50M or 25M, Verify fails.
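> For example, the failing case is the same command with the per-maptask count
> bumped to 50M (the output dir name here is just illustrative):
> HADOOP_CLASSPATH="/home/stack/conf_hbase:`/home/stack/hbase/bin/hbase
> classpath`" ./hadoop/bin/hadoop --config ~/conf_hadoop
> org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList --monkey
> serverKilling Generator 5 50000000 g3.tmp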
> Looking into it.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)