I started HDFS by running start-dfs.sh (I also tried start-all.sh, but it makes
no difference).

I was able to create directories by running: hadoop dfs -mkdir <dir>
and by using the Hadoop Java API, i.e. getting a FileSystem instance and calling
fs.mkdirs().
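
Roughly, the API side looks like the sketch below (the /test/dir path is just an
example, not the real one I use):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirTest {
    public static void main(String[] args) throws Exception {
        // Picks up fs.default.name from the Hadoop config on the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // This succeeds, same as "hadoop dfs -mkdir" from the shell.
        fs.mkdirs(new Path("/test/dir"));
    }
}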

When I try FileSystem.createNewFile, I get "Waiting to find target
node:" messages. By stepping through the source code, I can see that the client
cannot connect to the data node.
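
The failing call is essentially this (same sketch as above, example path only):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateFileTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // mkdirs() works, but this call hangs and keeps printing the
        // "Waiting to find target node:" messages.
        fs.createNewFile(new Path("/test/dir/somefile"));
    }
}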

Thanks for your help.

-John

On 11/24/06, John Wang <[EMAIL PROTECTED]> wrote:

Hi:

   After starting HDFS, I am able to create directories via the Hadoop API
and the shell app.

   However, I am not able to create a new file: I keep getting errors when
connecting to the data node (on localhost:50010).

   In the admin UI, the Live Datanodes list looks correct: it shows
localhost:50010.

   And in the data node log, I see the line:
2006-11-24 10:05:07,261 INFO org.apache.hadoop.dfs.DataNode : Opened server at 50010

   So it looks like the data node is alive.
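
    To double-check, a plain TCP connect (nothing Hadoop-specific) should show
whether anything is actually listening on that port; something like:

import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws Exception {
        // Raw TCP connect to the data node port reported in the log.
        Socket s = new Socket("localhost", 50010);
        System.out.println("Connected to " + s.getRemoteSocketAddress());
        s.close();
    }
}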

    Also, by clicking on the "Browse the filesystem" link in the admin UI, I
am taken to this address:


http://192.168.1.3:65535/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F

    That address does not resolve.

    Any suggestions would be greatly appreciated.

Thanks

-John

