[ https://issues.apache.org/jira/browse/HDFS-1106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12863957#action_12863957 ]
Ravi Phulari commented on HDFS-1106:
------------------------------------
Eugene, any idea why these datanodes went down?
Any chance that somebody added them to the hosts.exclude file? Please verify this:
if these nodes are listed in hosts.exclude, remove them and restart the cluster.
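In case it helps, here is a rough sketch of that check, assuming the exclude file is the
one named by dfs.hosts.exclude in conf/hdfs-site.xml (the path below is only a
placeholder, not your actual path):

    # on the namenode host
    grep -E '10\.160\.4\.109|vm-10-160-4-109' /path/to/hosts.exclude
    # if the nodes show up, remove those lines, then either restart HDFS or have
    # the namenode re-read its include/exclude lists:
    hadoop dfsadmin -refreshNodes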
> Datanode throwing UnregisteredDatanodeException -- expects itself to serve storage!
> -----------------------------------------------------------------------------------
>
> Key: HDFS-1106
> URL: https://issues.apache.org/jira/browse/HDFS-1106
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 0.20.1
> Reporter: Eugene Hung
>
> We run a large Hadoop cluster used by many different universities. When some
> DataNodes went down recently, they came back up and then generated this error
> message in their datanode logs:
> 2010-04-22 16:58:37,314 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.UnregisteredDatanodeException: Data node vm-10-160-4-109:50010 is attempting to report storage ID DS-1884904520-10.160.4.109-50010-1255720271773. Node 10.160.4.109:50010 is expected to serve this storage.
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDatanode(FSNamesystem.java:3972)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.verifyNodeRegistration(FSNamesystem.java:3937)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2052)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:735)
>         at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:966)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:962)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:960)
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy4.register(Unknown Source)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:544)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1230)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1273)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1394)
> Note that the node it expects to serve this storage is the datanode itself, yet it
> still throws an UnregisteredDatanodeException for some reason. This is causing these
> datanodes to remain "dead" to the namenode. Does anyone know why this is occurring
> and what we can do to fix it?
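For context, the two names in the message above appear to refer to the same machine,
once by hostname (vm-10-160-4-109:50010) and once by IP (10.160.4.109:50010). Below is
a minimal, self-contained sketch of the kind of name-vs-storage-ID comparison that
would produce such a message; it is only an illustration, not the actual
FSNamesystem/UnregisteredDatanodeException code, and the class, method, and exception
names in it are made up:

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only: a simplified model of a namenode-side registration check,
    // not the real Hadoop 0.20 source.
    public class StorageIdCheck {

      // storage ID -> name (host:port) last recorded for that storage
      private static final Map<String, String> storageIdToNodeName =
          new HashMap<String, String>();

      static void register(String storageId, String nodeName) {
        String expected = storageIdToNodeName.get(storageId);
        if (expected != null && !expected.equals(nodeName)) {
          // Same storage ID but a different node name: reject the registration.
          throw new IllegalStateException("Data node " + nodeName
              + " is attempting to report storage ID " + storageId
              + ". Node " + expected + " is expected to serve this storage.");
        }
        storageIdToNodeName.put(storageId, nodeName);
      }

      public static void main(String[] args) {
        String storageId = "DS-1884904520-10.160.4.109-50010-1255720271773";
        // Earlier registration was recorded under the IP-based name...
        register(storageId, "10.160.4.109:50010");
        // ...the restarted datanode now reports a hostname-based name, so the
        // string comparison fails even though both names refer to the same host.
        try {
          register(storageId, "vm-10-160-4-109:50010");
        } catch (IllegalStateException e) {
          System.out.println(e.getMessage());
        }
      }
    }

Running this prints the same wording as the log: the storage ID was first recorded
under the IP-based name, the restarted datanode now reports a hostname-based name, and
a plain string comparison treats those as different nodes even though they point at
the same host.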