[
https://issues.apache.org/jira/browse/HDFS-15660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17307712#comment-17307712
]
Yiqun Lin edited comment on HDFS-15660 at 3/24/21, 9:48 AM:
------------------------------------------------------------
Hi [~weichiu], this compatibility issue only happens when an old-version Hadoop
client does not contain the storage type introduced in HDFS-9806. It's a
client-side issue, not a server-side one. As versions 3.1, 3.2 and 3.3 already
contain the new storage type, it should be okay to do the upgrade, so I didn't
cherry-pick this to other branches.
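For context, a sketch of the relevant proto2 definitions (based on hdfs.proto in 3.x; treat the exact field numbers and {{required}} modifiers as illustrative rather than authoritative):

```protobuf
syntax = "proto2";

// Sketch of the 3.x storage type enum; PROVIDED was added by HDFS-9806.
// A 2.6 client was generated from a schema that stops at RAM_DISK.
enum StorageTypeProto {
  DISK = 1;
  SSD = 2;
  ARCHIVE = 3;
  RAM_DISK = 4;
  PROVIDED = 5; // unknown to 2.6-era clients
}

// In proto2, an unrecognized enum value on the wire is moved into the
// message's unknown fields, so on an old client `type` parses as unset.
// With the field declared required, building the parsed message fails
// with UninitializedMessageException.
message StorageTypeQuotaInfoProto {
  required StorageTypeProto type = 1;
  required uint64 quota = 2;
  required uint64 consumed = 3;
}
```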
was (Author: linyiqun):
Hi [~weichiu], this compatibility issue only happens when an old-version Hadoop
client does not contain the storage type introduced in HDFS-9806. It's a
client-side issue, not a server-side one. As versions 3.1, 3.2 and 3.3 already
contain the new storage type, it should be okay to do the upgrade, so I only
pushed the fix to trunk.
> StorageTypeProto is not compatible between 3.x and 2.6
> -------------------------------------------------------
>
> Key: HDFS-15660
> URL: https://issues.apache.org/jira/browse/HDFS-15660
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 3.0.0, 3.0.1, 2.9.2, 2.8.5, 2.7.7, 2.10.1
> Reporter: Ryan Wu
> Assignee: Ryan Wu
> Priority: Major
> Fix For: 2.9.3, 3.4.0, 2.10.2
>
> Attachments: HDFS-15660.002.patch, HDFS-15660.003.patch
>
>
> In our case, after the NN had been upgraded to 3.1.3 while the DNs were still
> on 2.6, we found that when Hive called the getContentSummary method, the
> client and server were not compatible because Hadoop 3 added the new PROVIDED
> storage type.
> {code:java}
> 20/04/15 14:28:35 INFO retry.RetryInvocationHandler---main: Exception while invoking getContentSummary of class ClientNamenodeProtocolTranslatorPB over xxxxx/xxxxx:8020. Trying to fail over immediately.
> java.io.IOException: com.google.protobuf.ServiceException: com.google.protobuf.UninitializedMessageException: Message missing required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>     at org.apache.hadoop.ipc.ProtobufHelper.getRemoteException(ProtobufHelper.java:47)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:819)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
>     at com.sun.proxy.$Proxy11.getContentSummary(Unknown Source)
>     at org.apache.hadoop.hdfs.DFSClient.getContentSummary(DFSClient.java:3144)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:706)
>     at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:702)
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getContentSummary(DistributedFileSystem.java:713)
>     at org.apache.hadoop.fs.shell.Count.processPath(Count.java:109)
>     at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:317)
>     at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
>     at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
>     at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
>     at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
>     at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
>     at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> Caused by: com.google.protobuf.ServiceException: com.google.protobuf.UninitializedMessageException: Message missing required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:272)
>     at com.sun.proxy.$Proxy10.getContentSummary(Unknown Source)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getContentSummary(ClientNamenodeProtocolTranslatorPB.java:816)
>     ... 23 more
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing required fields: summary.typeQuotaInfos.typeQuotaInfo[3].type
>     at com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65392)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$GetContentSummaryResponseProto$Builder.build(ClientNamenodeProtocolProtos.java:65331)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:263)
>     ... 25 more
> {code}
> This compatibility issue only happens when the StorageType feature is used in the cluster.
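The failure mode can be simulated without Hadoop or protobuf on the classpath. The sketch below mimics proto2 enum decoding on a 2.6-era client: an enum number the generated schema does not know (here 5 = PROVIDED, added in 3.x by HDFS-9806) resolves to nothing, which proto2 treats as "required field not set". All class and method names are illustrative, not Hadoop code.

```java
import java.util.HashMap;
import java.util.Map;

// Simulates how a client generated from the 2.6 schema decodes the
// StorageTypeProto enum. The map only knows DISK/SSD/ARCHIVE/RAM_DISK;
// wire value 5 (PROVIDED) maps to null, which in real proto2 leaves the
// required `type` field unset and triggers UninitializedMessageException.
public class StorageTypeCompat {
    static final Map<Integer, String> OLD_CLIENT_ENUM = new HashMap<>();
    static {
        OLD_CLIENT_ENUM.put(1, "DISK");
        OLD_CLIENT_ENUM.put(2, "SSD");
        OLD_CLIENT_ENUM.put(3, "ARCHIVE");
        OLD_CLIENT_ENUM.put(4, "RAM_DISK");
        // 5 = PROVIDED is absent: the 2.6 schema predates HDFS-9806.
    }

    // Returns the decoded name, or null when the wire value is unknown.
    static String decode(int wireValue) {
        return OLD_CLIENT_ENUM.get(wireValue);
    }

    public static void main(String[] args) {
        // A 3.x NameNode with storage-type quotas set may include
        // PROVIDED (=5) in the getContentSummary response.
        int[] wire = {1, 2, 3, 5};
        for (int v : wire) {
            String type = decode(v);
            if (type == null) {
                // This is where real proto2 fails with:
                // "Message missing required fields:
                //  summary.typeQuotaInfos.typeQuotaInfo[...].type"
                System.out.println("unknown storage type on wire: " + v);
            } else {
                System.out.println("decoded: " + type);
            }
        }
    }
}
```

The fix direction follows from the sketch: either the server avoids sending enum values the client cannot know, or the field stops being treated as required on the receiving side.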
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]