Hi,

The problem is solved now. My guess is that there was a firewall up on the 
node, which blocked a connection request from the namenode to the datanode. I'm 
not deep enough into the connection setup details yet to say who connects to 
whom at what time during datanode setup, but after turning off the firewall 
the node is added without a problem, even though it takes a bit:

2008-08-15 14:58:24,588 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
dnRegistration = DatanodeRegistration(XXX:9000, storageID=, infoPort=50075, 
ipcPort=50020)
.... 40 SEC delay!?
2008-08-15 14:59:03,160 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
New storage id DS-1565240160-IP-9000-1218805143154 is assigned to data-node 
IP:9000
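For what it's worth, a quick way to confirm the firewall theory before disabling it entirely is to probe the datanode's ports from the namenode. This is only a sketch: `check_port` is a hypothetical helper (not part of Hadoop), and the port numbers reflect the defaults visible in the log above (50020 datanode IPC, 50075 datanode HTTP, plus 50010 for data transfer by default); adjust them to your own configuration.

```shell
# Hypothetical helper: probe whether a TCP port on a host is reachable,
# using bash's /dev/tcp pseudo-device. Prints "reachable"/"unreachable".
check_port() {
  local host=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} reachable"
  else
    echo "${host}:${port} unreachable"
  fi
}

# Example: run from the namenode against the new datanode
# ("new-datanode-host" is a placeholder hostname).
for p in 50010 50020 50075; do
  check_port new-datanode-host "$p"
done
```

If any of these report unreachable while the firewall is up and reachable with it down, that would pin the registration failure on the firewall rather than on the Hadoop config.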

cheers Kai

----- "Kai Mosebach" <[EMAIL PROTECTED]> wrote:

> Two things are interesting/notable:
> 
> 1.) when the datanode directory is empty (at the very beginning of
> adding a new node) it says:
> 
> 2008-08-14 09:36:01,085 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Storage directory
> /data/hadoop-data/datanode is not formatted.
> 2008-08-14 09:36:01,085 INFO
> org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
> 
> 2.) When running a second time or so, the log looks like below, but
> compared to a "registered" host, there is no storageID.
> I assume the config is right, since the Hadoop binaries/configs are the
> same everywhere: the setup is homogeneous and all nodes are served from
> the same NFS share.
> 
> Thanks Kai
> 
> 2008-08-14 12:34:33,418 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
> /************************************************************
> STARTUP_MSG: Starting DataNode
> STARTUP_MSG:   host = XXX
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.19.0-dev
> STARTUP_MSG:   build =
> http://svn.apache.org/repos/asf/hadoop/core/trunk -r 684143; compiled
> by 'root' on Tue Aug 12 16:24:13 CEST 2008
> ************************************************************/
> 2008-08-14 12:34:33,770 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
> FSDatasetStatusMBean
> 2008-08-14 12:34:33,774 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
> 9000
> 2008-08-14 12:34:33,778 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
> 1048576 bytes/s
> 2008-08-14 12:34:33,862 INFO org.mortbay.util.Credential: Checking
> Resource aliases
> 2008-08-14 12:34:33,943 INFO org.mortbay.http.HttpServer: Version
> Jetty/5.1.4
> 2008-08-14 12:34:33,944 INFO org.mortbay.util.Container: Started
> HttpContext[/static,/static]
> 2008-08-14 12:34:33,944 INFO org.mortbay.util.Container: Started
> HttpContext[/logs,/logs]
> 2008-08-14 12:34:34,271 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-08-14 12:34:34,322 INFO org.mortbay.util.Container: Started
> WebApplicationContext[/,/]
> 2008-08-14 12:34:34,326 INFO org.mortbay.http.SocketListener: Started
> SocketListener on 0.0.0.0:50075
> 2008-08-14 12:34:34,326 INFO org.mortbay.util.Container: Started
> [EMAIL PROTECTED]
> 2008-08-14 12:34:34,335 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
> Initializing JVM Metrics with processName=DataNode, sessionId=null
> 2008-08-14 12:34:34,422 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
> Initializing RPC Metrics with hostName=DataNode, port=50020
> 2008-08-14 12:34:34,428 INFO org.apache.hadoop.ipc.Server: IPC Server
> Responder: starting
> 2008-08-14 12:34:34,429 INFO org.apache.hadoop.ipc.Server: IPC Server
> listener on 50020: starting
> 2008-08-14 12:34:34,430 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 0 on 50020: starting
> 2008-08-14 12:34:34,431 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 1 on 50020: starting
> 2008-08-14 12:34:34,431 INFO org.apache.hadoop.ipc.Server: IPC Server
> handler 2 on 50020: starting
> 2008-08-14 12:34:34,431 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
> DatanodeRegistration(XXX:9000, storageID=, infoPort=50075,
> ipcPort=50020)
> 
> > If the config is right, then this is the procedure to add a new
> > datanode.
> > Do you see any exceptions logged in your datanode log?
> > Run it as a daemon so it logs everything into a file under
> > HADOOP_LOG_DIR:
> > ./bin/hadoop-daemons.sh --config $HADOOP_CONF_DIR start datanode
> > 
> > Thanks,
> > Lohit
> > 
> > ----- Original Message ----
> > From: Kai Mosebach <[EMAIL PROTECTED]>
> > To: [email protected]
> > Sent: Thursday, August 14, 2008 1:48:02 AM
> > Subject: Dynamically adding datanodes
> > 
> > Hi,
> > 
> > How can I add a datanode dynamically to a hadoop cluster without
> > restarting the whole cluster?
> > I was trying to run "hadoop datanode" on the new datanode with the
> > appropriate config (pointing to my correct namenode) but it does not
> > show up there.
> > 
> > Is there a way?
> > 
> > Thanks Kai
