This problem appears to be unrelated to my use of a scanner, or the
client code.
If, in the hbase shell, I run
count 'table'
it also gets stuck, at around record number 10,000.
Is this a corrupted table? Is there any way to repair it?
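Before assuming corruption, one thing I could do is dump the table's region boundaries to see which region the stall point (around row 10,000 of the count) falls in. This is only a minimal sketch, assuming the 0.90-era client API (HTable.getRegionsInfo()); the table name "table" is just a placeholder:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.HServerAddress;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.util.Bytes;

public class DumpRegions {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "table"); // placeholder table name
    // Map of each region to the server hosting it (0.90-era API)
    Map<HRegionInfo, HServerAddress> regions = table.getRegionsInfo();
    for (Map.Entry<HRegionInfo, HServerAddress> e : regions.entrySet()) {
      HRegionInfo region = e.getKey();
      System.out.println(region.getRegionNameAsString()
          + " start=" + Bytes.toStringBinary(region.getStartKey())
          + " end=" + Bytes.toStringBinary(region.getEndKey())
          + " server=" + e.getValue());
    }
    table.close();
  }
}

If the boundaries look sane, running hbck against the instance would be my next step on the repair question.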
On 13/01/12 23:03, Joel Halbert wrote:
It always hangs waiting on the same record....
On 13/01/12 22:48, Joel Halbert wrote:
Successfully got a few thousand results....nothing exceptional in the
hbase log:
2012-01-13 22:42:13,830 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2012-01-13 22:42:13,832 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new decompressor
2012-01-13 22:42:32,580 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: LRUStats: total=332.03 MB, free=61.32 MB, max=393.35 MB, blocks=1524, accesses=720942, hits=691565, hitRatio=95.92%, cachingAccesses=720938, cachingHits=691565, cachingHitsRatio=95.92%, evictions=149, evicted=27849, evictedPerRun=186.90603637695312
2012-01-13 22:42:36,222 DEBUG org.apache.hadoop.hbase.master.LoadBalancer: Server information: localhost.localdomain,59902,1326492448413=15
2012-01-13 22:42:36,223 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing. servers=1 regions=15 average=15.0 mostloaded=15 leastloaded=15
2012-01-13 22:42:36,236 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 14 catalog row(s) and gc'd 0 unreferenced parent region(s)
On 13/01/12 22:46, T Vinod Gupta wrote:
did u get any scan results at all?
check your region server and master hbase logs for any warnings..
also, just fyi - the standalone version of hbase is not super stable. i
have had many similar problems in the past. the distributed mode is much
more robust.
thanks
On Fri, Jan 13, 2012 at 2:36 PM, Joel Halbert <[email protected]> wrote:
I have a standalone instance of HBase (single instance, on localhost).
After reading a few thousand records using a scanner, my thread is stuck
waiting:
"main" prio=10 tid=0x00000000016d4800 nid=0xf3a in Object.wait()
[0x00007fbe96dc3000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.**java:503)
at org.apache.hadoop.hbase.ipc.**HBaseClient.call(HBaseClient.**
java:757)
- locked<0x00000007e2ba21d0> (a org.apache.hadoop.hbase.ipc.**
HBaseClient$Call)
at org.apache.hadoop.hbase.ipc.**HBaseRPC$Invoker.invoke(**
HBaseRPC.java:257)
at $Proxy4.next(Unknown Source)
at org.apache.hadoop.hbase.**client.ScannerCallable.call(**
ScannerCallable.java:79)
at org.apache.hadoop.hbase.**client.ScannerCallable.call(**
ScannerCallable.java:38)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**getRegionServerWithRetries(**
HConnectionManager.java:1019)
at org.apache.hadoop.hbase.**client.MetaScanner.metaScan(**
MetaScanner.java:182)
at org.apache.hadoop.hbase.**client.MetaScanner.metaScan(**
MetaScanner.java:95)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**prefetchRegionCache(**
HConnectionManager.java:649)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**locateRegionInMeta(**
HConnectionManager.java:703)
- locked<0x00000007906dfcf8> (a java.lang.Object)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**locateRegion(**HConnectionManager.java:594)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**locateRegion(**HConnectionManager.java:559)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**getRegionLocation(**
HConnectionManager.java:416)
at
org.apache.hadoop.hbase.**client.ServerCallable.**instantiateServer(
**ServerCallable.java:57)
at org.apache.hadoop.hbase.**client.ScannerCallable.**
instantiateServer(**ScannerCallable.java:63)
at org.apache.hadoop.hbase.**client.HConnectionManager$**
HConnectionImplementation.**getRegionServerWithRetries(**
HConnectionManager.java:1018)
at org.apache.hadoop.hbase.**client.HTable$ClientScanner.**
nextScanner(HTable.java:1104)
at org.apache.hadoop.hbase.**client.HTable$ClientScanner.**
next(HTable.java:1196)
at org.apache.hadoop.hbase.**client.HTable$ClientScanner$1.**
hasNext(HTable.java:1256)
at crawler.cache.PageCache.**accept(PageCache.java:254)
Concretely, it is stuck in the scanner iterator, on the it.hasNext() call (which delegates to ClientScanner.next):
Scan scan = new Scan(Bytes.toBytes(hostnameTarget),
        Bytes.toBytes(hostnameTarget + (char) 127));
scan.setMaxVersions(1);
scan.setCaching(4);
ResultScanner resscan = table.getScanner(scan);
Iterator<Result> it = resscan.iterator();
while (it.hasNext()) { // stuck here
    // ...
}
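For reference, here is a variant of the same loop I could try, with an explicit RPC timeout and a guaranteed close, so the hang would surface as an exception rather than an indefinite wait. This is only a sketch: the hbase.rpc.timeout setting and the 60s value are assumptions about the 0.90-era configuration, and "table" is a placeholder name.

Configuration conf = HBaseConfiguration.create();
conf.setInt("hbase.rpc.timeout", 60000); // assumed: fail the RPC after 60s instead of waiting forever
HTable table = new HTable(conf, "table"); // placeholder table name

Scan scan = new Scan(Bytes.toBytes(hostnameTarget),
        Bytes.toBytes(hostnameTarget + (char) 127));
scan.setMaxVersions(1);
scan.setCaching(4);

ResultScanner resscan = table.getScanner(scan);
try {
    for (Result r : resscan) { // ResultScanner is Iterable<Result>
        // process r ...
    }
} finally {
    resscan.close(); // always release the scanner lease on the region server
    table.close();
}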
Any clues?