Ah, you're right! Jar mismatch. Thanks :-)
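
For the archives, in case anyone else trips over this: the OutOfMemoryError down in readAndProcess/readFields is usually just a side effect of the mismatch -- when the client and server jars speak different RPC versions, the server ends up reading bogus lengths out of the request and tries to allocate a huge buffer, hence the immediate heap blowup even on a tiny "list" call. The quickest check is to confirm the hbase (and hadoop) jars really are byte-identical on every node. A rough, JDK-only helper along these lines does the trick (class name and output format here are just illustrative):

    import java.io.FileInputStream;
    import java.io.InputStream;
    import java.security.MessageDigest;
    import java.util.jar.Attributes;
    import java.util.jar.JarFile;
    import java.util.jar.Manifest;

    // Hypothetical helper: prints the manifest Implementation-Version and an
    // MD5 digest of each jar given on the command line, so the copies on
    // different nodes can be compared by eye.
    public class JarCheck {
      public static void main(String[] args) throws Exception {
        for (String path : args) {
          // Manifest version (may be absent, depending on how the jar was built).
          JarFile jar = new JarFile(path);
          Manifest mf = jar.getManifest();
          String version = (mf == null) ? "(no manifest)"
              : mf.getMainAttributes().getValue(Attributes.Name.IMPLEMENTATION_VERSION);
          jar.close();

          // MD5 over the whole file -- byte-identical jars give identical digests.
          MessageDigest md5 = MessageDigest.getInstance("MD5");
          InputStream in = new FileInputStream(path);
          byte[] buf = new byte[8192];
          int n;
          while ((n = in.read(buf)) != -1) {
            md5.update(buf, 0, n);
          }
          in.close();

          StringBuilder hex = new StringBuilder();
          for (byte b : md5.digest()) {
            int v = b & 0xff;
            if (v < 16) {
              hex.append('0');
            }
            hex.append(Integer.toHexString(v));
          }

          System.out.println(path + "  version=" + version + "  md5=" + hex);
        }
      }
    }

Compile it once, copy it to each node, and run it against the hbase and hadoop jars; if any md5 differs, that node is the culprit.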

On Fri, Dec 19, 2008 at 8:31 PM, Ryan LeCompte <[email protected]> wrote:
> Really? Hmm.... I just tested all 5 nodes and the JARs are the same,
> since I've just rsync'd them over. The server doesn't spit out any
> errors when it starts up...
>
> Could this be a Hadoop/HBase jar mismatch?
>
>
> On Fri, Dec 19, 2008 at 7:58 PM, Andrew Purtell <[email protected]> wrote:
>> That is almost certainly an HBase jar version mismatch
>> between the master and the client.
>>
>> I had this problem once when the jars for my master and one
>> regionserver were out of sync.
>>
>> - Andy
>>
>>> From: Ryan LeCompte <[email protected]>
>>> Subject: Re: Region server memory requirements
>>> To: [email protected]
>>> Date: Friday, December 19, 2008, 3:53 PM
>>>
>>> Hey Stack,
>>>
>>> Okay, I was able to get Hadoop 0.19 up and running with
>>> HBase trunk. It seems to start up fine, however now when
>>> I connect to the HBase shell and do a simple "list" or
>>> try to create a table, I get the following almost
>>> immediately in the HBase master log files:
>>>
>>> 2008-12-19 18:49:47,408 WARN org.apache.hadoop.ipc.HBaseServer: Out of Memory in server select
>>> java.lang.OutOfMemoryError: Java heap space
>>>     at org.apache.hadoop.hbase.ipc.HBaseRPC$Invocation.readFields(HBaseRPC.java:142)
>>>     at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.processData(HBaseServer.java:846)
>>>     at org.apache.hadoop.hbase.ipc.HBaseServer$Connection.readAndProcess(HBaseServer.java:813)
>>>     at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.doRead(HBaseServer.java:399)
>>>     at org.apache.hadoop.hbase.ipc.HBaseServer$Listener.run(HBaseServer.java:308)
>>> 2008-12-19 18:49:49,888 INFO org.apache.hadoop.hbase.master.BaseScanner: All 0 .META. region(s) scanned
>>>
>>> Any ideas?
>>>
>>> Thanks,
>>> Ryan
