Dennis Kubes wrote:
I had a somewhat difficult time figuring out how to get HBase started.
In the end, it was pretty simple. Here are the steps:
1. Check out hadoop from svn into a directory, say ~/hadooptrunk, and
compile it with ant (rough commands in the first sketch after this list).
2. Move the built hadoop-xx directory to wherever you want to run it,
say ~/hadoop.
3. Set the hadoop tmp directory in hadoop-site.xml (the defaults for all
the other variables should be fine); see the config sketch after this list.
4. Copy scripts from ~/hadooptrunk/src/contrib/hbase/bin to
~/hadoop/src/contrib/hbase/bin
5. Format the hadoop dfs with ~/hadoop/bin/hadoop namenode -format
6. Start the dfs with ~/hadoop/bin/start-dfs.sh (logs are viewable
in ~/hadoop/logs by default; MapReduce is not needed for HBase)
7. Go to the hbase directory ~/hadoop/src/contrib/hbase
8. The HBase defaults are fine for now; start HBase with
~/hadoop/src/contrib/hbase/bin/start-hbase.sh (logs are viewable
in ~/hadoop/logs by default)
9. Enter the HBase shell with ~/hadoop/src/contrib/hbase/bin/hbase shell
10. Have fun with HBase (a sample shell session follows this list)
11. Stop the HBase servers with
~/hadoop/src/contrib/hbase/bin/stop-hbase.sh and wait until the
servers are finished stopping.
12. Stop the hadoop dfs with ~/hadoop/bin/stop-dfs.sh
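
For steps 1 and 2, the commands look roughly like this; treat the svn
URL (hadoop currently lives under the Lucene project) and the ant
target as assumptions to double-check for your setup:

  # sketch only: verify the svn URL and the ant target for your version
  svn checkout http://svn.apache.org/repos/asf/lucene/hadoop/trunk ~/hadooptrunk
  cd ~/hadooptrunk
  ant package                    # builds a distribution under build/
  mv build/hadoop-* ~/hadoop     # exact directory name depends on the version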
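
For step 3, hadoop-site.xml only needs to override hadoop.tmp.dir;
everything else falls back to the defaults in hadoop-default.xml. A
minimal sketch of ~/hadoop/conf/hadoop-site.xml (the value path is
just an example):

  <?xml version="1.0"?>
  <configuration>
    <property>
      <name>hadoop.tmp.dir</name>
      <!-- pick a real path; several other directories default to
           locations under this one -->
      <value>/home/you/hadoop-tmp</value>
    </property>
  </configuration>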
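
For step 9, I won't vouch for the exact shell grammar since it has been
changing between versions; help; prints the current command set and is
the authoritative reference. On a fresh install something like this
should work (show tables; should come back empty, exit; quits):

  help;
  show tables;
  exit;
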
Hope this helps.
Did you try to run it with LocalFS / Cygwin, and if so, did you notice
any peculiarities? I tried this once, and at first the start-hbase.sh
script wouldn't work (missing log files? it looked like some variables
in the paths were being expanded incorrectly), and then when I started
the master and a regionserver by hand, it would complain about missing
map files and all requests would time out ... I gave up after that and
moved to HDFS.
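
(If I try it again I'll probably run the script through shell tracing
to see exactly how each variable gets expanded; a generic debugging
sketch, nothing HBase-specific:

  # print each command and its expanded arguments as the script runs
  sh -x ~/hadoop/src/contrib/hbase/bin/start-hbase.sh 2>&1 | tee trace.log

On Cygwin, comparing the traced paths against cygpath output usually
shows where Unix and Windows forms got mixed.)
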
--
Best regards,
Andrzej Bialecki <><
Information Retrieval, Semantic Web
Embedded Unix, System Integration
http://www.sigram.com Contact: info at sigram dot com