Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hama Wiki" for change notification.
The "GettingStarted" page has been changed by thomasjungblut:
http://wiki.apache.org/hama/GettingStarted?action=diff&rev1=53&rev2=54

Comment:
Did a lot of installation stuff and mode explanation

  Currently, Hama requires JRE 1.6 or higher and ssh to be set up between nodes in the cluster:
- * hadoop-0.20.x for HDFS
+ * hadoop-0.20.2 for HDFS
  * Sun Java JDK 1.6.x or higher version
+ 
+ '''Note:''' Later releases of Hadoop, e.g. 0.20.3, may not work because of changes in the RPC protocol!
+ 
+ == Download ==
+ 
+ You can download Hadoop 0.20.2 here:
+ 
+ http://www.apache.org/dyn/closer.cgi/hadoop/core/
+ 
+ You can download Hama here:
+ 
+ http://www.apache.org/dyn/closer.cgi/incubator/hama
+ 
+ == Hadoop Installation ==
+ 
+ We recommend Michael Noll's installation guides:
+ 
+ Single Node:
+ http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
+ 
+ Multi Node:
+ http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-multi-node-cluster/
+ 
+ == Hama Installation ==
+ 
+ Untar the release to your destination of choice:
+ {{{
+ tar -xzf hama-0.3.0-incubating.tar.gz
+ }}}
+ 
+ Don't forget to ''chown'' the directory to the same user you configured Hadoop with in the previous step.

  == Startup script ==

  The {{{$HAMA_HOME/bin}}} directory contains some scripts used to start up the Hama daemons.

  * {{{start-bspd.sh}}} - Starts all Hama daemons, the BSPMaster, !GroomServers and Zookeeper.
+ 
+ '''Note:''' You have to start Hama with the same user that is configured for Hadoop.

  == Configuration files ==

@@ -26, +59 @@

  == Setting up Hama ==
- This section describes how to get started by setting up a Hama cluster. '''Note''': the default is a local-mode.
+ This section describes how to get started by setting up a Hama cluster.
+ 
+ == Modes ==
+ 
+ Just like Hadoop, we distinguish between three modes:
+ * Local Mode
+ * Pseudo Distributed Mode
+ * Distributed Mode
+ 
+ === Local Mode ===
+ 
+ This is the default mode when you download and install Hama (>= 0.3.0).
+ When you submit a job, it will run a local multithreaded BSP engine on your server.
+ It is configured by setting the ''"bsp.master.address"'' property to ''"local"''.
+ You can adjust the number of threads used by this engine via the ''"bsp.local.tasks.maximum"'' property.
+ See the "Settings" section for how and where to configure this; a minimal local-mode sketch follows the mode descriptions below.
+ 
+ === Pseudo Distributed Mode ===
+ 
+ Use this mode when you have just a single server and want to launch all the daemon processes (BSPMaster, Groom and Zookeeper).
+ It is configured by setting ''"bsp.master.address"'' to a host address, e.g. ''"localhost"'', and putting the same address into the ''"groomservers"'' file in the configuration directory.
+ As stated, it will run a BSPMaster, a Groom and a Zookeeper on your machine.
+ 
+ === Distributed Mode ===
+ 
+ This mode is just like the "Pseudo Distributed Mode", but you have multiple machines, which are listed in the ''"groomservers"'' file.
+ 
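+ For illustration, a minimal local-mode configuration could look like the sketch below. It only uses the two properties named above; the file location (typically {{{$HAMA_HOME/conf/hama-site.xml}}}, see "Configuration files") and the thread count of 4 are assumptions you should adjust to your setup:
+ 
+ {{{
+ <configuration>
+   <!-- run the local multithreaded BSP engine instead of a cluster -->
+   <property>
+     <name>bsp.master.address</name>
+     <value>local</value>
+   </property>
+   <!-- maximum number of threads the local engine may use (example value) -->
+   <property>
+     <name>bsp.local.tasks.maximum</name>
+     <value>4</value>
+   </property>
+ </configuration>
+ }}}
+ 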
+ == Settings ==
  * '''BSPMaster and Zookeeper settings''' - Figure out where to run your HDFS namenode and BSPMaster. Set the variable {{{bsp.master.address}}} to the BSPMaster's intended host:port. Set the variable {{{fs.default.name}}} to the HDFS Namenode's intended host:port.

@@ -42, +102 @@

      literal string "local" or a host:port for distributed mode
    </description>
  </property>
- 
+ 
  <property>
    <name>fs.default.name</name>
    <value>hdfs://host1.mydomain.com:9000/</value>

@@ -51, +111 @@

      "local" or a host:port for HDFS.
    </description>
  </property>
- 
+ 
  <property>
    <name>hama.zookeeper.quorum</name>
    <value>host1.mydomain.com,host2.mydomain.com</value>

@@ -100, +160 @@

  % $HAMA_HOME/bin/hama jar hama-examples-0.x.0-incubating.jar
  }}}
+ It will then offer you some examples to choose from.
+ Refer to our examples page if you have further questions about how to use them:
+ http://wiki.apache.org/hama/examples
+ 
  == Hama Web Interfaces ==

  The web UI provides information about BSP job statistics of the Hama cluster and about running/completed/failed jobs.
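+ 
+ To put the pieces together, a first run on a single machine could look like the sketch below. Here {{{$HADOOP_HOME}}} stands for your Hadoop installation directory; the {{{stop-bspd.sh}}} counterpart script and the exact jar file name are assumptions based on the 0.3.0-incubating release used in this guide, so adjust them to your installation:
+ 
+ {{{
+ # start HDFS first (from your Hadoop 0.20.2 installation), then the Hama daemons
+ $HADOOP_HOME/bin/start-dfs.sh
+ $HAMA_HOME/bin/start-bspd.sh
+ 
+ # run the examples jar without arguments to get the list of bundled examples
+ $HAMA_HOME/bin/hama jar hama-examples-0.3.0-incubating.jar
+ 
+ # stop the Hama daemons again when you are done
+ $HAMA_HOME/bin/stop-bspd.sh
+ }}}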