Hi,
Here is the log dump (from the DataNode):

2010-03-09 00:36:47,795 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Verification succeeded for blk_6221934658367436050_1025
2010-03-09 00:46:49,155 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 12 blocks got processed in 11 msecs
2010-03-09 01:08:08,430 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
************************************************************/
2010-03-09 22:45:54,715 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = weliam-desktop/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.1
STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.1-rc1 -r 810220; compiled by 'oom' on Tue Sep  1 20:55:56 UTC 2009
************************************************************/
2010-03-09 22:45:55,330 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to localhost/127.0.0.1:9000 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:774)
    at org.apache.hadoop.ipc.Client.call(Client.java:742)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy4.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:346)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:383)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:314)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:291)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:269)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:216)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1283)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1238)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1246)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1368)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:233)
    at sun.nio.ch.IOUtil.read(IOUtil.java:206)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:236)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:55)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
    at java.io.FilterInputStream.read(FilterInputStream.java:116)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:276)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
    at java.io.DataInputStream.readInt(DataInputStream.java:370)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)

2010-03-09 22:45:55,334 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at weliam-desktop/127.0.1.1
************************************************************/

At this point, unless I format the NameNode, the Hadoop web interface at
port 50070 does not come back.


William

On Mon, Mar 8, 2010 at 10:59 PM, Eason.Lee <[email protected]> wrote:

> It's usually in $HADOOP_HOME/logs
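>
> For example (just a sketch; the file names follow the
> hadoop-<user>-<daemon>-<host>.log pattern, so yours will differ):
>
>   $ ls $HADOOP_HOME/logs
>   hadoop-william-namenode-weliam-desktop.log
>   hadoop-william-datanode-weliam-desktop.log
>   ...
>   $ tail -50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log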
>
> 2010/3/9 William Kang <[email protected]>
>
> > Hi,
> > If the namenode is not up, how can I get the logdir?
> >
> >
> > William
> >
> > On Mon, Mar 8, 2010 at 10:39 PM, Eason.Lee <[email protected]> wrote:
> >
> > > 2010/3/9 William Kang <[email protected]>
> > >
> > > > Hi Eason,
> > > > Thanks a lot for your reply. But I do have another folder which is
> > > > not inside /tmp. I did not use the default settings.
> > > >
> > >
> > > you'd better post your configuration in detail~~
> > >
> > >
> > > > To make things clear, I will describe what happened:
> > > > 1. hadoop namenode -format
> > > > 2. start-all.sh
> > > > 3. running fine, http://localhost:50070 is accessible
> > > > 4. stop-all.sh
> > > > 5. start-all.sh, http://localhost:50070 is NOT accessible
> > > > Unless I format the namenode, the HDFS master
> > > > http://localhost:50070/dfshealth.jsp is not accessible.
> > > >
> > >
> > > Try "jps" to see if the namenode is up~~
> > > If the namenode is not up, there may be an error log in the logdir;
> > > try to post the error~~
> > >
> > >
> > > > So I have to redo steps 1 and 2 again to regain access to
> > > > http://localhost:50070/dfshealth.jsp. But all the data would be lost
> > > > after the format.
> > > >
> > >
> > > Formatting will delete the old namespace, so everything will be lost~~
> > >
> > >
> > > >
> > > >
> > > > William
> > > >
> > > > On Mon, Mar 8, 2010 at 1:02 AM, Eason.Lee <[email protected]> wrote:
> > > >
> > > > > 2010/3/8 William Kang <[email protected]>
> > > > >
> > > > > > Hi guys,
> > > > > > Thanks for your replies. I did not put anything in /tmp. It's
> > > > > > just that
> > > > > >
> > > > >
> > > > > The default setting of dfs.name.dir/dfs.data.dir is a subdir in
> > > > > /tmp.
> > > > >
> > > > > > every time I restart Hadoop, the localhost:50070 does not show
> > > > > > up. The localhost:50030 is fine. Unless I reformat the namenode,
> > > > > > I won't be able to see the HDFS web page at 50070. It did not
> > > > > > clean /tmp automatically. But
> > > > > >
> > > > >
> > > > > It's not you who cleaned the /tmp dir. Some operation cleans it
> > > > > automatically~~
> > > > >
> > > > >
> > > > > > after the format, everything is gone; well, it is a format. I
> > > > > > did not really see anything in the log. Not sure what caused it.
> > > > > >
> > > > > >
> > > > > > William
> > > > > >
> > > > > >
> > > > > > On Mon, Mar 8, 2010 at 12:39 AM, Bradford Stephens <
> > > > > > [email protected]> wrote:
> > > > > >
> > > > > > > Yeah. Don't put things in /tmp. That's unpleasant in the long
> > > > > > > run.
> > > > > > >
> > > > > > > > On Sun, Mar 7, 2010 at 9:36 PM, Eason.Lee <[email protected]> wrote:
> > > > > > > > Is your /tmp directory cleaned automatically?
> > > > > > > >
> > > > > > > > Try to set dfs.name.dir/dfs.data.dir to a safe dir~~
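> > > > > > > >
> > > > > > > > For example, in conf/hdfs-site.xml (the paths here are only
> > > > > > > > an illustration; any directory that survives a reboot will do):
> > > > > > > >
> > > > > > > >   <property>
> > > > > > > >     <name>dfs.name.dir</name>
> > > > > > > >     <!-- illustrative path; pick any dir outside /tmp -->
> > > > > > > >     <value>/home/william/hdfs/name</value>
> > > > > > > >   </property>
> > > > > > > >   <property>
> > > > > > > >     <name>dfs.data.dir</name>
> > > > > > > >     <!-- illustrative path; pick any dir outside /tmp -->
> > > > > > > >     <value>/home/william/hdfs/data</value>
> > > > > > > >   </property>
> > > > > > > >
> > > > > > > > After moving dfs.name.dir you will have to format one last
> > > > > > > > time, but after that the data should survive restarts~~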
> > > > > > > >
> > > > > > > > 2010/3/8 William Kang <[email protected]>
> > > > > > > >
> > > > > > > >> Hi all,
> > > > > > > >> I am running HDFS in pseudo-distributed mode. Every time
> > > > > > > >> after I restart the machine, I have to format the namenode,
> > > > > > > >> otherwise the localhost:50070 won't show up. It is quite
> > > > > > > >> annoying to do so, since all the data would be lost. Does
> > > > > > > >> anybody know why this happens? And how should I fix this
> > > > > > > >> problem?
> > > > > > > >> Many thanks.
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> William
> > > > > > > >>
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > http://www.drawntoscalehq.com -- The intuitive, cloud-scale
> > > > > > > data solution. Process, store, query, search, and serve all
> > > > > > > your data.
> > > > > > >
> > > > > > > http://www.roadtofailure.com -- The Fringes of Scalability,
> > > > > > > Social Media, and Computer Science
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>
