Hi Luis,

just because there is no exception does not mean that it is working. Thanks for attaching your lsof output: it shows that all of your daemons are listening on IPv6 sockets, and Hadoop does not support IPv6.
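A common workaround for the IPv6 binding (a sketch, assuming the usual conf/hadoop-env.sh location of a 0.20.x-style install) is to make the daemon JVMs prefer the IPv4 stack:

```shell
# conf/hadoop-env.sh (the path assumes a standard Hadoop layout)
# Tell the daemon JVMs to bind IPv4 sockets instead of IPv6 ones.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"
```

After restarting the daemons, lsof -i should list the sockets as IPv4, like in the output below.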
Please set up Hadoop correctly [1] and then use Hama. For comparison, here is my lsof -i output:

hadoop@raynor:/home/thomasjungblut$ lsof -i
COMMAND PID  USER   FD  TYPE DEVICE SIZE/OFF NODE NAME
java    1144 hadoop 33u IPv4 8819   0t0      TCP  *:49737 (LISTEN)
java    1144 hadoop 37u IPv4 9001   0t0      TCP  raynor:9001 (LISTEN)
java    1144 hadoop 47u IPv4 9222   0t0      TCP  *:50070 (LISTEN)
java    1144 hadoop 52u IPv4 9429   0t0      TCP  raynor:9001->findlay:35283 (ESTABLISHED)
java    1144 hadoop 53u IPv4 9431   0t0      TCP  raynor:9001->karrigan:57345 (ESTABLISHED)
java    1249 hadoop 33u IPv4 8954   0t0      TCP  *:54235 (LISTEN)
java    1249 hadoop 44u IPv4 9422   0t0      TCP  *:50010 (LISTEN)
java    1249 hadoop 45u IPv4 9426   0t0      TCP  *:50075 (LISTEN)

There are two ways to determine whether Hadoop is set up correctly:

1. Look at the web interface of the NameNode [2] and check that there is no safe mode message and no DataNode is missing.
2. Run a sample MapReduce job, for example WordCount [3].

If Hama still does not work after that, please ask your question again.

Thanks and good luck :)

[1] http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
[2] http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#hdfs-name-node-web-interface
[3] http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/#run-the-mapreduce-job

2011/9/15 Luis Eduardo Pineda Morales <[email protected]>

> Hi all,
>
> I am attempting to run the distributed mode.
> I have HDFS running on a single machine (pseudo-distributed mode):
>
> pineda@server00:~/hadoop$ jps
> 472 SecondaryNameNode
> 1429 Jps
> 32733 NameNode
> 364 DataNode
>
> pineda@net-server00:~/hadoop$ lsof -i
> COMMAND PID   USER   FD  TYPE DEVICE  SIZE NODE NAME
> java    364   pineda 46u IPv6 2532945 TCP  *:41462 (LISTEN)
> java    364   pineda 52u IPv6 2533275 TCP  server00:42445->server00:54310 (ESTABLISHED)
> java    364   pineda 60u IPv6 2533307 TCP  *:50010 (LISTEN)
> java    364   pineda 61u IPv6 2533511 TCP  *:50075 (LISTEN)
> java    364   pineda 66u IPv6 2533518 TCP  *:50020 (LISTEN)
> java    472   pineda 46u IPv6 2533286 TCP  *:43098 (LISTEN)
> java    472   pineda 59u IPv6 2533536 TCP  *:50090 (LISTEN)
> java    32733 pineda 46u IPv6 2532751 TCP  *:54763 (LISTEN)
> java    32733 pineda 56u IPv6 2533062 TCP  server00:54310 (LISTEN)
> java    32733 pineda 67u IPv6 2533081 TCP  *:50070 (LISTEN)
> java    32733 pineda 76u IPv6 2533276 TCP  server00:54310->server00:42445 (ESTABLISHED)
>
> i.e. fs.default.name = hdfs://server00:54310/
>
> Then I run Hama on server04 (groom on server03, ZooKeeper on server05):
>
> pineda@server04:~/hama$ bin/start-bspd.sh
> server05: starting zookeeper, logging to /logs/hama-pineda-zookeeper-server05.out
> starting bspmaster, logging to /logs/hama-pineda-bspmaster-server04.out
> 2011-09-15 17:08:43.349:INFO::Logging to STDERR via org.mortbay.log.StdErrLog
> 2011-09-15 17:08:43.409:INFO::jetty-0.3.0-incubating
> server03: starting groom, logging to /logs/hama-pineda-groom-server03.out
>
> This is my hama-site.xml file:
>
> <configuration>
>   <property>
>     <name>bsp.master.address</name>
>     <value>server04</value>
>   </property>
>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://server00:54310</value>
>   </property>
>
>   <property>
>     <name>hama.zookeeper.quorum</name>
>     <value>server05</value>
>   </property>
> </configuration>
>
> In theory I can connect to HDFS, because I don't get any ConnectException, but Hama doesn't run, and I get this exception trace in my
> bspmaster.log after Jetty is bound:
>
> 2011-09-15 17:08:43,409 INFO org.apache.hama.http.HttpServer: Jetty bound to port 40013
> 2011-09-15 17:08:44,070 INFO org.apache.hama.bsp.BSPMaster: problem cleaning system directory: null
> java.io.IOException: Call to server00/192.168.122.10:54310 failed on local exception: java.io.EOFException
>     at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
>     at org.apache.hadoop.ipc.Client.call(Client.java:743)
>     at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>     at $Proxy4.getProtocolVersion(Unknown Source)
>     at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
>     at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
>     at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
>     at org.apache.hama.bsp.BSPMaster.<init>(BSPMaster.java:263)
>     at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:421)
>     at org.apache.hama.bsp.BSPMaster.startMaster(BSPMaster.java:415)
>     at org.apache.hama.BSPMasterRunner.run(BSPMasterRunner.java:46)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>     at org.apache.hama.BSPMasterRunner.main(BSPMasterRunner.java:56)
> Caused by: java.io.EOFException
>     at java.io.DataInputStream.readInt(DataInputStream.java:375)
>     at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>
> Do you know how to
> fix this? Do you know which directory it is trying to clean?
>
> Any idea is welcome!
>
> Thanks,
> Luis.

--
Thomas Jungblut
Berlin

mobile: 0170-3081070
business: [email protected]
private: [email protected]
