Hi all!

I finally managed to set up and run Hama in fully distributed mode (thanks a lot 
to Thomas Jungblut!)

I'm using Hama 0.3.0 and Hadoop 0.20.2 with IPv4, set up as described in 
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
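In case it helps others who hit the same IPv6 problem: the usual way to force the IPv4 stack (also mentioned in that tutorial, if I remember correctly) is to set the preferIPv4Stack flag in conf/hadoop-env.sh on every node. A minimal fragment, assuming the default conf/ layout:

```shell
# conf/hadoop-env.sh -- force the JVM onto the IPv4 stack,
# since Hadoop does not support IPv6
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
```

Restart the daemons afterwards so the new JVM option takes effect; `lsof -i` should then show the sockets as IPv4.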

The same settings didn't work with Hadoop 0.20.203 (said to be the most recent 
stable release).
Hope these settings are useful for you.
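For anyone debugging the same EOFException as in the trace further down: as I understand it, it is the classic symptom of an RPC version mismatch between the Hadoop jar Hama bundles and the Hadoop running on the cluster (e.g. 0.20.2 vs 0.20.203). A quick sanity check is to compare the first line of `hadoop version` on every node; a small sketch (the helper function, paths, and version strings are my own examples, not anything Hama ships):

```shell
# The first line of `hadoop version` identifies the build, e.g. "Hadoop 0.20.2".
# parse_hadoop_version extracts the version token from such a line.
parse_hadoop_version() {
  echo "$1" | awk '{print $2}'
}

# In practice you would feed it the real output, e.g.
#   cluster=$(~/hadoop/bin/hadoop version | head -1)
# (the ~/hadoop path is an assumption). Here with literal examples:
v_cluster=$(parse_hadoop_version "Hadoop 0.20.2")
v_hama=$(parse_hadoop_version "Hadoop 0.20.203.0")
if [ "$v_cluster" != "$v_hama" ]; then
  echo "version mismatch: $v_cluster vs $v_hama"
fi
```

If the versions differ, aligning them (using the same Hadoop release everywhere) is what fixed it for me.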

Luis


On 15 Sep 2011, at 19:25, Thomas Jungblut wrote:

> Hey, I'm sorry, the IPv6 hint was misleading.
> In your screenshot I see that you are using an Append version of Hadoop.
> Did you try it with 0.20.2?
> 
> 2011/9/15 Luis Eduardo Pineda Morales <[email protected]>
> Hi Thomas, apparently IPv6 wasn't the problem: Hadoop is now running on 
> IPv4 and I still get the same exceptions in Hama.
> 
> pineda@server00:~/hadoop$ jps
> 10592 NameNode
> 10922 Jps
> 10695 DataNode
> 10844 SecondaryNameNode
> 
> pineda@server00:~/hadoop$ lsof -i
> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
> java    10592 pineda   46u  IPv4 2559447       TCP *:50272 (LISTEN)
> java    10592 pineda   56u  IPv4 2559684       TCP server00:54310 (LISTEN)
> java    10592 pineda   67u  IPv4 2559694       TCP *:50070 (LISTEN)
> java    10592 pineda   71u  IPv4 2559771       TCP server00:54310->server00:51666 (ESTABLISHED)
> java    10592 pineda   72u  IPv4 2559810       TCP server00:51668->server00:54310 (ESTABLISHED)
> java    10592 pineda   73u  IPv4 2559811       TCP server00:54310->server00:51668 (ESTABLISHED)
> java    10592 pineda   77u  IPv4 2560218       TCP server00:54310->server00:51671 (ESTABLISHED)
> java    10695 pineda   46u  IPv4 2559682       TCP *:44935 (LISTEN)
> java    10695 pineda   52u  IPv4 2559764       TCP server00:51666->server00:54310 (ESTABLISHED)
> java    10695 pineda   60u  IPv4 2559892       TCP *:50010 (LISTEN)
> java    10695 pineda   61u  IPv4 2559899       TCP *:50075 (LISTEN)
> java    10695 pineda   66u  IPv4 2560208       TCP *:50020 (LISTEN)
> java    10844 pineda   46u  IPv4 2560204       TCP *:41188 (LISTEN)
> java    10844 pineda   52u  IPv4 2560217       TCP server00:51671->server00:54310 (ESTABLISHED)
> java    10844 pineda   59u  IPv4 2560225       TCP *:50090 (LISTEN)
> 
> 
> Also, the web interface doesn't show any errors (screenshot attached), and I'm 
> able to run Hadoop shell commands.  Any other ideas? :-/
> 
> Luis
> 
> 
> 
> 
> On 15 Sep 2011, at 18:17, Thomas Jungblut wrote:
> 
> > Hi Luis,
> >
> > just because there is no exception doesn't mean that it is working.
> > Thanks for appending your lsof output: it shows the daemons bound to IPv6,
> > which Hadoop does not support.
> >
> > Please set up Hadoop correctly [1] and then use Hama.
> > For example here is my lsof -i output:
> >
> > hadoop@raynor:/home/thomasjungblut$ lsof -i
> >> COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> >> java    1144 hadoop   33u  IPv4   8819      0t0  TCP *:49737 (LISTEN)
> >> java    1144 hadoop   37u  IPv4   9001      0t0  TCP raynor:9001 (LISTEN)
> >> java    1144 hadoop   47u  IPv4   9222      0t0  TCP *:50070 (LISTEN)
> >> java    1144 hadoop   52u  IPv4   9429      0t0  TCP raynor:9001->findlay:35283 (ESTABLISHED)
> >> java    1144 hadoop   53u  IPv4   9431      0t0  TCP raynor:9001->karrigan:57345 (ESTABLISHED)
> >> java    1249 hadoop   33u  IPv4   8954      0t0  TCP *:54235 (LISTEN)
> >> java    1249 hadoop   44u  IPv4   9422      0t0  TCP *:50010 (LISTEN)
> >> java    1249 hadoop   45u  IPv4   9426      0t0  TCP *:50075 (LISTEN)
> >>
> >
> > There are two ways to determine whether Hadoop is set up correctly:
> >
> >   1. Look at the web interface of the NameNode [2] and check that there is
> >   no safemode message and no DataNode missing.
> >   2. Or run a sample MapReduce job, for example WordCount [3].
> >
> > If Hama still isn't working afterwards, feel free to ask again.
> >
> > Thanks and good luck :)
> >
> > [1]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
> > [2]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
> > [3]
> > http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job
> >
> >
> > 2011/9/15 Luis Eduardo Pineda Morales <[email protected]>
> >
> >> Hi all,
> >>
> >> I am attempting to run in distributed mode. I have HDFS running on a
> >> single machine (pseudo-distributed mode):
> >>
> >> pineda@server00:~/hadoop$ jps
> >> 472 SecondaryNameNode
> >> 1429 Jps
> >> 32733 NameNode
> >> 364 DataNode
> >>
> >> pineda@server00:~/hadoop$ lsof -i
> >> COMMAND   PID   USER   FD   TYPE  DEVICE SIZE NODE NAME
> >> java      364 pineda   46u  IPv6 2532945       TCP *:41462 (LISTEN)
> >> java      364 pineda   52u  IPv6 2533275       TCP server00:42445->server00:54310 (ESTABLISHED)
> >> java      364 pineda   60u  IPv6 2533307       TCP *:50010 (LISTEN)
> >> java      364 pineda   61u  IPv6 2533511       TCP *:50075 (LISTEN)
> >> java      364 pineda   66u  IPv6 2533518       TCP *:50020 (LISTEN)
> >> java      472 pineda   46u  IPv6 2533286       TCP *:43098 (LISTEN)
> >> java      472 pineda   59u  IPv6 2533536       TCP *:50090 (LISTEN)
> >> java    32733 pineda   46u  IPv6 2532751       TCP *:54763 (LISTEN)
> >> java    32733 pineda   56u  IPv6 2533062       TCP server00:54310 (LISTEN)
> >> java    32733 pineda   67u  IPv6 2533081       TCP *:50070 (LISTEN)
> >> java    32733 pineda   76u  IPv6 2533276       TCP server00:54310->server00:42445 (ESTABLISHED)
> >>
> >> i.e.    fs.default.name  =  hdfs://server00:54310/
> >>
> >> then I run Hama on server04 (groom on server03, ZooKeeper on server05):
> >>
> >> pineda@server04:~/hama$ bin/start-bspd.sh
> >> server05: starting zookeeper, logging to /logs/hama-pineda-zookeeper-server05.out
> >> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
> >> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
> >> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
> >> server03: starting groom, logging to /logs/hama-pineda-groom-server03.out
> >>
> >> this is my hama-site.xml file:
> >>
> >> <configuration>
> >>   <property>
> >>     <name>bsp.master.address</name>
> >>     <value>server04</value>
> >>   </property>
> >>
> >>   <property>
> >>     <name>fs.default.name</name>
> >>     <value>hdfs://server00:54310</value>
> >>   </property>
> >>
> >>   <property>
> >>     <name>hama.zookeeper.quorum</name>
> >>     <value>server05</value>
> >>   </property>
> >> </configuration>
> >>
> >>
> >> In theory I can connect to HDFS, since I don't get any ConnectException,
> >> but Hama doesn't run, and I get this exception trace in my bspmaster.log
> >> after Jetty is bound:
> >>
> >>
> >> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty bound to port 40013
> >> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem cleaning system directory: null
> >> java.io.IOException: Call to server00/192.168.122.10:54310 failed on local exception: java.io.EOFException
> >>       at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
> >>       at org.apache.hadoop.ipc.Client.call(Client.java:743)
> >>       at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> >>       at $Proxy4.getProtocolVersion(Unknown Source)
> >>       at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
> >>       at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
> >>       at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
> >>       at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
> >>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
> >>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
> >>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
> >>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
> >>       at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
> >>       at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
> >>       at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> >>       at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
> >>       at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> >> Caused by: java.io.EOFException
> >>       at java.io.DataInputStream.readInt(DataInputStream.java:375)
> >>       at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
> >>       at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
> >>
> >>
> >> Do you know how to fix this? Do you know which directory it is trying to
> >> clean?
> >>
> >> Any ideas are welcome!
> >>
> >> Thanks,
> >> Luis.
> >
> >
> >
> >
> > --
> > Thomas Jungblut
> > Berlin
> >
> > mobile: 0170-3081070
> >
> > business: [email protected]
> > private: [email protected]
> 
> 
> 
> 
> 
> -- 
> Thomas Jungblut
> Berlin
> 
> mobile: 0170-3081070
> 
> business: [email protected]
> private: [email protected]
