Hey Stack,
Okay, I was able to get Hadoop 0.19 up and running with HBase trunk.
It seems to start up fine; however, now when I connect to the hbase
shell and do a simple "list" or try to create a table, I get the
following almost immediately in the hbase master log:
2008-12-19 18:49:47,408 WARN org.apache.hadoop.ipc.HBaseServer: Out of Memory in server select
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:142)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:846)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:813)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:399)
        at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.run(HBaseServer.java:308)
2008-12-19 18:49:49,888 INFO org.apache.hadoop.hbase.master.BaseScanner: All 0 .META. region(s) scanned
Any ideas?
Thanks,
Ryan
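
P.S. For context, the change I mention in the earlier message below --
packing many tiny string values into a single serialized object per
cell -- could be sketched with plain JDK serialization like this. This
is just a rough illustration; the class and method names are made up
for the example, and the actual HBase put of the resulting byte[] is
omitted:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public class CellPacking {

    // Serialize a batch of small key/value pairs into one byte[]
    // suitable for storing as a single cell value.
    static byte[] pack(Map<String, String> values) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            // Copy into a concrete serializable map before writing.
            out.writeObject(new HashMap<>(values));
        }
        return buf.toByteArray();
    }

    // Deserialize the packed cell value back into a map.
    @SuppressWarnings("unchecked")
    static Map<String, String> unpack(byte[] cell)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(cell))) {
            return (Map<String, String>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> batch = new HashMap<>();
        batch.put("a", "1");
        batch.put("b", "2");
        // One cell value instead of two tiny ones.
        byte[] cell = pack(batch);
        Map<String, String> back = unpack(cell);
        System.out.println(back.get("a") + " " + back.get("b"));
    }
}
```

The idea is just to cut the per-cell overhead by storing one larger
value instead of many tiny ones.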
On Fri, Dec 19, 2008 at 4:56 PM, Ryan LeCompte <[email protected]> wrote:
> Stack,
>
> Thanks for responding. I'm going to experiment with using a single
> serialized Java object as the cell value, as opposed to many tiny
> string cell values, and see if that helps. I can't afford to keep
> increasing the heap, since I'll eventually run out of memory for my
> other map/reduce jobs.
>
> Will keep you all posted.
>
> Thanks,
> Ryan
>
>
> On Fri, Dec 19, 2008 at 4:18 PM, stack <[email protected]> wrote:
>> stack wrote:
>>>
>>> Small cell sizes use up loads of memory. See HBASE-900 for more on this.
>>>
>>> Other things to try:
>>>
>>> + hbase.io.index.interval is 32 by default. Set it to 1024 or larger in
>>> your case. Access may be a little slower -- though maybe not by much,
>>> since your cells are so small -- but the memory-resident indices will be
>>> smaller.
>>
>> Oh, pardon me, this is broken in 0.18.x HBase. See HBASE-981. Changing
>> the value has no effect.
>> St.Ack
>>
>
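
For anyone following along, the index-interval change suggested above
would go into conf/hbase-site.xml roughly like this. The value is
illustrative, and as noted it has no effect on 0.18.x (HBASE-981):

```xml
<!-- conf/hbase-site.xml: raise the block index interval so fewer
     index entries are kept memory-resident (value illustrative). -->
<property>
  <name>hbase.io.index.interval</name>
  <value>1024</value>
</property>
```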