[
https://issues.apache.org/jira/browse/HDFS-16038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17354195#comment-17354195
]
lei w commented on HDFS-16038:
------------------------------
[~LiJinglun], thanks for your comment. Your understanding is correct. But in a
big cluster, updating the DataNode package takes a long time because we need to
restart the DataNodes in batches to load the new package. During that rolling
restart, a DataNode that has not yet loaded the new package will not recognize
`HAServiceState.observer` and will throw an exception. Should we fix it?
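To make the proposal concrete, here is a minimal sketch of the kind of switch
described in the issue, assuming a hypothetical flag on the NameNode side that
downgrades OBSERVER to STANDBY in the heartbeat status so that DataNodes still
running the old package can parse it. The class, flag, and method names below
are made up for illustration; only HAServiceState itself is a real Hadoop type.
{code:java}
import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

/**
 * Illustrative sketch only: the class, the switch, and the method are
 * hypothetical and do not exist in Hadoop today.
 */
public class HAStateCompat {

  /** Hypothetical switch: report OBSERVER as STANDBY to old DataNodes. */
  private final boolean reportObserverAsStandby;

  public HAStateCompat(boolean reportObserverAsStandby) {
    this.reportObserverAsStandby = reportObserverAsStandby;
  }

  /**
   * Returns the HA state to put into the heartbeat response. DataNodes
   * running the old package only understand ACTIVE and STANDBY, so OBSERVER
   * is downgraded to STANDBY while the switch is on; otherwise the real
   * state is returned unchanged.
   */
  public HAServiceState stateForHeartbeat(HAServiceState actual) {
    if (reportObserverAsStandby && actual == HAServiceState.OBSERVER) {
      return HAServiceState.STANDBY;
    }
    return actual;
  }
}
{code}
Once every DataNode has loaded the new package, such a switch could be turned
off so the Observer state is reported normally again.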
> DataNode Unrecognized Observer Node when cluster add an observer node
> ---------------------------------------------------------------------
>
> Key: HDFS-16038
> URL: https://issues.apache.org/jira/browse/HDFS-16038
> Project: Hadoop HDFS
> Issue Type: New Feature
> Reporter: lei w
> Priority: Critical
>
> When an Observer node is added to the cluster, the DataNode will not be able
> to recognize HAServiceState.observer, because we have not upgraded the
> DataNode yet. Generally, it takes a long time for a big cluster to upgrade
> all of its DataNodes. So should we add a switch to replace the Observer state
> with the Standby state when the DataNode can not recognize the
> HAServiceState.observer state?
> The following are some error messages from the DataNode:
> {code:java}
> 11:14:31,812 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
> IOException in offerService
> com.google.protobuf.InvalidProtocolBufferException: Message missing required
> fields: haStatus.state
> at
> com.google.protobuf.UninitializedMessageException.asInvalidProtocolBufferException(UninitializedMessageException.java:81)
> at
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
> {code}