[ https://issues.apache.org/jira/browse/HADOOP-3337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12593897#action_12593897 ]
Konstantin Shvachko commented on HADOOP-3337:
---------------------------------------------
DatanodeDescriptor is not sent over RPC and is not supposed to be. You can
never get a DatanodeDescriptor on the other end.
DatanodeDescriptor is essentially a name-node-private class.
Although the actual object is a DatanodeDescriptor, RPC serializes the base
class DatanodeInfo using its Writable implementation and sends the latter
over the network.
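
To make the distinction concrete, here is a minimal sketch of the class
relationship (illustrative only, not the actual 0.18 source; it assumes
DatanodeDescriptor does not override the Writable methods):

{code}
// Sketch: why only DatanodeInfo state crosses the wire.
class DatanodeInfo extends DatanodeID implements Writable {
  // Wire format used by RPC:
  public void write(DataOutput out) throws IOException { /* DatanodeInfo fields */ }
  public void readFields(DataInput in) throws IOException { /* DatanodeInfo fields */ }
}

class DatanodeDescriptor extends DatanodeInfo {
  // Name-node private state: block lists, scheduling info, etc.
  // No Writable override here, so RPC uses the inherited
  // DatanodeInfo implementation and the receiver can only
  // reconstruct a DatanodeInfo.
}
{code}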
The problem here is that the serialization intended for DatanodeDescriptor
(which is only ever serialized to disk)
is mixed with the serialization of DatanodeInfo (which should be used only
for RPC).
We have been through this before.
I think we should introduce two new static methods in DatanodeDescriptor that
would provide the serialization to disk.
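
A rough sketch of what those two methods could look like (method names and
field layout are hypothetical, not taken from any patch):

{code}
// Hypothetical disk-only serialization, kept separate from the
// DatanodeInfo Writable methods that serve RPC.
static void writeDatanode(DatanodeDescriptor node, DataOutput out)
    throws IOException {
  // write exactly the fields the image/edits format needs,
  // in a fixed, documented order ...
}

static DatanodeDescriptor readDatanode(DataInput in) throws IOException {
  DatanodeDescriptor node = new DatanodeDescriptor();
  // read the same fields back in the same order ...
  return node;
}
{code}

With that split, a wire-format change like the ipcPort addition in
HADOOP-3283 could no longer silently change what gets written to the image
and edits files.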
> Name-node fails to start because DatanodeInfo format changed.
> -------------------------------------------------------------
>
> Key: HADOOP-3337
> URL: https://issues.apache.org/jira/browse/HADOOP-3337
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.18.0
> Reporter: Konstantin Shvachko
> Assignee: Tsz Wo (Nicholas), SZE
> Priority: Blocker
> Fix For: 0.18.0
>
> Attachments: 3337_20080501.patch, 3337_20080501b.patch,
> 3337_20080502.patch, 3337_20080502b.patch
>
>
> HADOOP-3283 introduced a new field ipcPort in DatanodeInfo, which was not
> reflected in the reading/writing of file system image files.
> In particular, reading edits generated by a previous version of Hadoop
> throws the following exception (a sketch of the misalignment follows the
> stack trace):
> {code}
> 08/05/02 00:02:50 ERROR dfs.NameNode: java.lang.IllegalArgumentException: No enum const class org.apache.hadoop.dfs.DatanodeInfo$AdminStates.0?/56.313
> at java.lang.Enum.valueOf(Enum.java:192)
> at org.apache.hadoop.io.WritableUtils.readEnum(WritableUtils.java:399)
> at org.apache.hadoop.dfs.DatanodeInfo.readFields(DatanodeInfo.java:318)
> at org.apache.hadoop.io.ArrayWritable.readFields(ArrayWritable.java:90)
> at org.apache.hadoop.dfs.FSEditLog.loadFSEdits(FSEditLog.java:499)
> at org.apache.hadoop.dfs.FSImage.loadFSEdits(FSImage.java:794)
> at org.apache.hadoop.dfs.FSImage.loadFSImage(FSImage.java:664)
> at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:280)
> at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:81)
> at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:276)
> at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:257)
> at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:133)
> at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:178)
> at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:164)
> at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:777)
> at org.apache.hadoop.dfs.NameNode.main(NameNode.java:786)
> {code}
> and startup fails.
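>
> Roughly what goes wrong: old edits were written without ipcPort, but the
> new readFields consumes one unconditionally, so every subsequent field is
> read from the wrong offset, and the enum-name read decodes unrelated bytes
> (hence the garbage "0?/56.313" above). A hypothetical sketch, with field
> names and order illustrative rather than copied from the source:
> {code}
> public void readFields(DataInput in) throws IOException {
>   name = Text.readString(in);
>   storageID = Text.readString(in);
>   // New in HADOOP-3283 -- absent from edits written by older versions:
>   ipcPort = in.readShort();
>   // Everything below now starts two bytes early, so this call
>   // decodes arbitrary bytes as an enum name and throws:
>   adminState = WritableUtils.readEnum(in, AdminStates.class);
> }
> {code}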