Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for 
change notification.

The following page has been changed by stack:
http://wiki.apache.org/lucene-hadoop/Hbase/10Minutes

The comment on the change is:
Pared the instructions down even more (with Jim's help testing on cygwindows)

------------------------------------------------------------------------------
  Here are the steps involved in checking out hbase and making it run.  Takes 
about ten minutes.
  
+  1. Check out hadoop from 
[http://lucene.apache.org/hadoop/version_control.html svn] and compile with 
ant: {{{% ant clean jar compile-contrib}}}.
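+  A minimal sketch of the checkout and build; the trunk URL here is an 
+  assumption, so use whichever URL the version_control page lists:
+  {{{
+  % svn checkout http://svn.apache.org/repos/asf/lucene/hadoop/trunk ~/hadooptrunk
+  % cd ~/hadooptrunk
+  % ant clean jar compile-contrib
+  }}}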
+  1. Optionally, move the hadoop build directory to wherever you want to run 
hbase from, say {{{~/hadoop}}}.
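+  For example (the versioned name under {{{build/}}} is an assumption; use 
+  whatever directory the build produced):
+  {{{
+  % mv ~/hadooptrunk/build/hadoop-0.16.0-dev ~/hadoop
+  }}}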
- There are two ways you can run HBase: on top of HDFS or on the local file 
system. By default HBase is set up to run on the local file system. Some of the 
steps below are different depending on which way you want to run HBase. These 
differences will be noted.
- 
-  1. Download hadoop from svn, untar to directory say ~/hadooptrunk and 
compile through ant.
-  1. Move the build hadoop-xx directory to where you want to run it, say 
~/hadoop
-  1. Edit hadoop-site.xml:
+  1. Edit {{{hadoop-site.xml}}}:
-   1. Set the hadoop tmp directory ({{{hadoop.tmp.dir}}}) in hadoop-site.xml
+   1. Set the hadoop tmp directory property, {{{hadoop.tmp.dir}}}
-   1. Set the default name ({{{fs.default.name}}}) to a host-port URI if 
running on HDFS
+   1. If running on HDFS, set the default name, {{{fs.default.name}}}, to the 
namenode host-port URI (otherwise, leave the default {{{file:///}}} value).
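+   A sketch of the two properties in {{{hadoop-site.xml}}}; the tmp path and 
+   the namenode host and port are placeholders:
+   {{{
+   <configuration>
+     <property>
+       <name>hadoop.tmp.dir</name>
+       <value>/tmp/hadoop-${user.name}</value>
+     </property>
+     <!-- HDFS only; omit this to stay on the local filesystem -->
+     <property>
+       <name>fs.default.name</name>
+       <value>localhost:9000</value>
+     </property>
+   </configuration>
+   }}}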
   1. Edit {{{hadoop-env.sh}}} and define {{{JAVA_HOME}}}
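+   For example (the JDK path is machine-specific):
+   {{{
+   # in ~/hadoop/conf/hadoop-env.sh -- point this at your JDK
+   export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun
+   }}}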
   1. If you are running HBase on top of HDFS:
    1. Format the hadoop dfs with {{{~/hadoop/bin/hadoop namenode -format}}}
-   1. Start the dfs through ~/hadoop/bin/start-dfs.sh  (logs are viewable in 
~/hadoop/logs by default, don't need mapreduce for hbase)
+   1. Start the dfs with {{{~/hadoop/bin/start-dfs.sh}}} (HDFS logs are 
viewable in {{{~/hadoop/logs}}} by default; mapreduce is not needed for hbase)
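+    The HDFS-only steps as one sketch (the {{{tail}}} is just a sanity check 
+    of the namenode log):
+    {{{
+    % ~/hadoop/bin/hadoop namenode -format
+    % ~/hadoop/bin/start-dfs.sh
+    % tail ~/hadoop/logs/*namenode*.log
+    }}}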
    1. Edit {{{hbase-site.xml}}}:
   {{{
   <configuration>
-    <property>
-      <name>hbase.master</name>
-      <value>0.0.0.0:60000</value>
-      <description>The port for the hbase master web UI
-      Set to -1 if you do not want the info server to run.
-      </description>
-    </property>
     <property>
       <name>hbase.rootdir</name>
       <value>/hbase</value>
@@ -29, +21 @@

     </property>
   </configuration>
   }}}
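+   Note that {{{hbase.rootdir}}} above is a path rather than a full URI; as we 
+   read the contrib hbase defaults it is resolved against the filesystem named 
+   by {{{fs.default.name}}}, so the same value serves both the local and the 
+   HDFS setup. If running on HDFS, you can list it once hbase has started a 
+   couple of steps below:
+   {{{
+   % ~/hadoop/bin/hadoop dfs -ls /hbase
+   }}}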
-  1.#6 Go to the hbase build directory ~/hadoop/src/contrib/hbase
+  1. Go to the hbase build directory, {{{~/hadoop/src/contrib/hbase}}}
-  1. Start hbase with ~/hadoop/src/contrib/hbase/bin/start-hbase.sh (logs are 
viewable in ~/hadoop/logs by default)
+  1. Start hbase with {{{~/hadoop/src/contrib/hbase/bin/start-hbase.sh}}} 
(hbase logs are viewable in {{{~/hadoop/logs}}} by default)
   1. Enter the hbase shell with {{{~/hadoop/src/contrib/hbase/bin/hbase shell}}}
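+  The build-directory, start, and shell steps as one sketch; the {{{help;}}} 
+  and {{{exit;}}} commands assume the semicolon-terminated syntax of the 
+  current hbase shell:
+  {{{
+  % cd ~/hadoop/src/contrib/hbase
+  % bin/start-hbase.sh
+  % bin/hbase shell
+  # at the prompt, "help;" lists the available commands and "exit;" quits
+  }}}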
-  1. Have fun with Hbase
+  1. Have fun with hbase
-  1. Stop the hbase servers with ~/hadoop/src/contrib/hbase/bin/stop-hbase.sh. 
 Wait until the servers are finished stopping.
+  1. Stop the hbase servers with 
{{{~/hadoop/src/contrib/hbase/bin/stop-hbase.sh}}}.  Wait until the servers 
are finished stopping.  Avoid killing the servers.
-  1. If you are running HBase on HDFS, stop the hadoop dfs with 
~/hadoop/bin/stop-dfs.sh
+  1. If you are running hbase on HDFS, stop the hadoop dfs with 
{{{~/hadoop/bin/stop-dfs.sh}}}
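+  The shutdown sequence as a sketch; stop hbase before the dfs it writes to:
+  {{{
+  % ~/hadoop/src/contrib/hbase/bin/stop-hbase.sh
+  # wait for the stop script to report the servers are down before proceeding
+  % ~/hadoop/bin/stop-dfs.sh
+  }}}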
  
- From an list posting by Dennis Kubes, Sun, 21 Oct 2007 23:09:46 -0500.
+ Based on a hadoop-dev list posting by Dennis Kubes, Sun, 21 Oct 2007 
23:09:46 -0500.
  
