You cannot run HBase 0.20 with data from 0.19: the on-disk format has changed. A migration tool will be available for the release candidate. Right now, if you want to use 0.20, you have to load your data into a new HBase instance (under a different top-level directory).
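[Editor's note: for example, a fresh 0.20 instance can be pointed at a new top-level directory via `hbase.rootdir` in hbase-site.xml. The host, port, and directory name below are illustrative, not from the thread.]

```xml
<!-- hbase-site.xml on the new 0.20 instance: point hbase.rootdir at a
     fresh top-level directory so the existing 0.19 data is left untouched.
     Host, port, and directory name here are examples only. -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:54310/hbase-0.20</value>
</property>
```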
--- Jim Kellerman, Powerset (Live Search, Microsoft Corporation)

> -----Original Message-----
> From: llpind [mailto:[email protected]]
> Sent: Thursday, June 18, 2009 9:34 AM
> To: [email protected]
> Subject: Re: [ANN] HBase 0.20.0-alpha available for download
>
> Yeah, we did have an issue with that before. After help from here, and
> some thought into the schema, we changed the HBase schema a bit.
>
> Now I have a scanner which iterates over, let's say, ~2000 records, and
> has an inner scanner which needs to change based on a value from the
> outer scanner (essentially a nested for loop). Performance is the issue
> here; I was able to track it down to opening a scanner. It appears
> HBASE-1118 addresses this issue. Is this correct?
>
> Also, could someone point me to what I'm doing wrong with my Hadoop
> 0.20.0 setup? I'm able to start my master server, but none of the slave
> nodes come up (unless I list the master as the slave). They all have the
> following error on startup:
>
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = slave1/192.168.0.234
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.20.0
> STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 763504; compiled by 'ndaley' on Thu Apr 9 05:18:40 UTC 2009
> ************************************************************/
> 2009-06-18 09:06:49,369 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.lang.NullPointerException
>     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:246)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
>
> 2009-06-18 09:06:49,370 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
>
> After searching a bit, it seems people have this problem when they
> forget to set fs.default.name, but I've got it set in core-site.xml:
>
> <property>
>   <name>fs.default.name</name>
>   <value>hdfs://master:54310</value>
>   <description>The name of the default file system. A URI whose scheme
>   and authority determine the FileSystem implementation. The URI's
>   scheme determines the config property (fs.SCHEME.impl) naming the
>   FileSystem implementation class. The URI's authority is used to
>   determine the host, port, etc. for a filesystem.</description>
> </property>
> <property>
>   <name>hadoop.tmp.dir</name>
>   <value>/data/hadoop-0.20.0-${user.name}</value>
>   <description>A base for other temporary directories.</description>
> </property>
>
> Could the 0.19 install be messing something up? Thanks
>
> stack-3 wrote:
> >
> > On Wed, Jun 17, 2009 at 2:34 PM, llpind <[email protected]> wrote:
> >
> >> Sweet, thanks stack. I'll be upgrading as well. My client program
> >> takes far too long to simply open a scanner. This problem appears to
> >> have been addressed in 0.20
> >> (https://issues.apache.org/jira/browse/HBASE-1118).
> >
> > Or was it HBASE-867?
> > St.Ack
>
> --
> View this message in context: http://www.nabble.com/-ANN--HBase-0.20.0-alpha-available-for-download-tp24067200p24095308.html
> Sent from the HBase User mailing list archive at Nabble.com.
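[Editor's note: the NullPointerException in NetUtils.createSocketAddr is the symptom of the DataNode reading a configuration in which fs.default.name is effectively unset (e.g. core-site.xml was edited on the master but never copied to the slaves, or HADOOP_CONF_DIR on a slave still points at an old 0.19 conf directory). A hedged sanity check, to be run on each slave; the fallback path below is only an example:]

```shell
# Confirm the config the daemon will actually load defines fs.default.name.
# CONF_DIR falls back to an example path if HADOOP_CONF_DIR is unset.
CONF_DIR=${HADOOP_CONF_DIR:-/usr/local/hadoop/conf}
grep -A 1 'fs.default.name' "$CONF_DIR/core-site.xml" 2>/dev/null \
  || echo "fs.default.name missing in $CONF_DIR"
```

If the grep prints nothing but the "missing" message on a slave, that node is falling back to the default local filesystem, which is exactly the condition that leads to the NullPointerException shown above.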
