[ https://issues.apache.org/jira/browse/HDDS-6926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17617910#comment-17617910 ]
George Jahad commented on HDDS-6926:
------------------------------------
> I think the problem is that Ozone generates protobuf classes using the shaded
> protobuf.
I don't think this is right; I think Ozone currently uses unshaded protobufs.
The class that throws the exception is called ProtobufRpcEngine. It comes in
two different versions: one uses shaded protobufs, the other unshaded.
In the Spark distro, ProtobufRpcEngine comes from this jar:
hadoop-client-api-3.3.1.jar
In that jar, the ProtobufRpcEngine class uses shaded protobufs:
org.apache.hadoop.shaded.com.google.protobuf.Message
In the Ozone distro, ProtobufRpcEngine comes from this jar:
hadoop-common-3.3.4.jar
In that jar, the ProtobufRpcEngine class uses unshaded protobufs:
com.google.protobuf.Message
So my question is: why do hadoop-client-api-3.3.1.jar and
hadoop-common-3.3.4.jar do this differently? Is that a bug in the Hadoop
build?
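For anyone who wants to double-check this outside of Spark, here is a minimal
diagnostic sketch (my own; the class name ShadingCheck is made up). It assumes
the Spark distro's hadoop-client-api-3.3.1.jar and the ozone-filesystem jar
are both on the classpath, and it reproduces the failed cast from the log:
{code:java}
public class ShadingCheck {
    public static void main(String[] args) throws Exception {
        // Which jar ProtobufRpcEngine was actually loaded from:
        Class<?> engine = Class.forName("org.apache.hadoop.ipc.ProtobufRpcEngine");
        System.out.println(engine.getProtectionDomain().getCodeSource().getLocation());

        // OMRequest extends the unshaded com.google.protobuf.Message, so it is
        // not assignable to the relocated copy that the hadoop-client-api build
        // of ProtobufRpcEngine casts to. (forName throws ClassNotFoundException
        // instead if the shaded copy is not on the classpath at all.)
        Class<?> shadedMsg =
            Class.forName("org.apache.hadoop.shaded.com.google.protobuf.Message");
        Class<?> omRequest = Class.forName(
            "org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OMRequest");
        System.out.println(shadedMsg.isAssignableFrom(omRequest)); // prints false
    }
}
{code}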
Another interesting point: both jars also contain the new version of the RPC
engine, ProtobufRpcEngine2. For that class, both jars are shaded identically
(which is what I would expect). The shaded protobufs there look like this:
org/apache/hadoop/thirdparty/protobuf/Message
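(Checking ProtobufRpcEngine2 is a one-line change to the sketch above: resolving
org.apache.hadoop.thirdparty.protobuf.Message should succeed in both distros,
consistent with the identical relocation.)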
> Cannot be cast to org.apache.hadoop.shaded.com.google.protobuf.Message
> ----------------------------------------------------------------------
>
> Key: HDDS-6926
> URL: https://issues.apache.org/jira/browse/HDDS-6926
> Project: Apache Ozone
> Issue Type: Bug
> Components: OM
> Affects Versions: 1.2.1
> Environment: Ozone version:
> 1.2.1 and 1.3.0
> Spark version:
> 3.2.1
> Reporter: MLikeWater
> Priority: Major
>
> The test process is as follows:
> 1. Download and use the official Spark installation package:
> {code:java}
> spark-3.2.1-bin-hadoop3.2.tgz
> {code}
> 2. Place the Ozone configuration file *ozone-site.xml* in ${SPARK_HOME}/conf
> and *ozone-filesystem-hadoop3-1.2.1.jar* in ${SPARK_HOME}/jars (a minimal
> programmatic equivalent is sketched after the listing):
> {code:java}
> $ ll spark-3.2.1-bin-hadoop3.2/jars/hadoop-*
> -rw-r--r-- 1 hadoop hadoop 19406393 Jun 17 11:27
> spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> -rw-r--r-- 1 hadoop hadoop 31717292 Jun 17 11:27
> spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-runtime-3.3.1.jar
> -rw-r--r-- 1 hadoop hadoop 3362359 Jun 17 11:27
> spark-3.2.1-bin-hadoop3.2/jars/hadoop-shaded-guava-1.1.1.jar
> -rw-r--r-- 1 hadoop hadoop 56507 Jun 17 11:27
> spark-3.2.1-bin-hadoop3.2/jars/hadoop-yarn-server-web-proxy-3.3.1.jar
> $ ll
> spark-3.2.1-bin-hadoop3.2/jars/ozone-filesystem-hadoop3-1.3.0-SNAPSHOT.jar
> -rw-r--r-- 1 hadoop hadoop 61732949 Jun 18 15:15
> spark-3.2.1-bin-hadoop3.2/jars/ozone-filesystem-hadoop3-1.3.0-SNAPSHOT.jar
> $ ll spark-3.2.1-bin-hadoop3.2/conf/ozone-site.xml
> -rw-r--r-- 1 hadoop hadoop 10692 Jun 18 15:28
> spark-3.2.1-bin-hadoop3.2/conf/ozone-site.xml{code}
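> For reference, here is a minimal sketch (names mine) of what Spark effectively
> does when resolving the ofs scheme; the same fs.ofs.impl lookup shows up in the
> debug log below:
> {code:java}
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
>
> public class OfsSmokeTest {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // In the Spark setup this mapping comes from the config files; both
>         // strings appear in the debug log below.
>         conf.set("fs.ofs.impl", "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
>         // initialize() on the resolved filesystem already performs the OM
>         // getServiceInfo RPC that throws the ClassCastException below.
>         FileSystem fs = FileSystem.get(URI.create("ofs://cluster1/tgwarehouse"), conf);
>         System.out.println(fs.getClass().getName());
>     }
> }
> {code}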
> However, Spark SQL fails to access Ozone data. The error is as follows:
> {code:java}
> 22/06/19 07:15:59 DEBUG FileSystem: Starting: Acquiring creator semaphore for
> ofs://cluster1/tgwarehouse
> 22/06/19 07:15:59 DEBUG FileSystem: Acquiring creator semaphore for
> ofs://cluster1/tgwarehouse: duration 0:00.000s
> 22/06/19 07:15:59 DEBUG Tracer: sampler.classes = ; loaded no samplers
> 22/06/19 07:15:59 DEBUG Tracer: span.receiver.classes = ; loaded no span
> receivers
> 22/06/19 07:15:59 DEBUG FileSystem: Starting: Creating FS
> ofs://cluster1/tgwarehouse
> 22/06/19 07:15:59 DEBUG FileSystem: Loading filesystems
> 22/06/19 07:15:59 DEBUG FileSystem: nullscan:// = class
> org.apache.hadoop.hive.ql.io.NullScanFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hive-exec-2.3.9-core.jar
> 22/06/19 07:15:59 DEBUG FileSystem: file:// = class
> org.apache.hadoop.fs.LocalFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: file:// = class
> org.apache.hadoop.hive.ql.io.ProxyLocalFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hive-exec-2.3.9-core.jar
> 22/06/19 07:15:59 DEBUG FileSystem: viewfs:// = class
> org.apache.hadoop.fs.viewfs.ViewFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: har:// = class
> org.apache.hadoop.fs.HarFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: http:// = class
> org.apache.hadoop.fs.http.HttpFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: https:// = class
> org.apache.hadoop.fs.http.HttpsFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: hdfs:// = class
> org.apache.hadoop.hdfs.DistributedFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: webhdfs:// = class
> org.apache.hadoop.hdfs.web.WebHdfsFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: swebhdfs:// = class
> org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/hadoop-client-api-3.3.1.jar
> 22/06/19 07:15:59 DEBUG FileSystem: o3fs:// = class
> org.apache.hadoop.fs.ozone.OzoneFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/ozone-filesystem-hadoop3-1.3.0-SNAPSHOT.jar
> 22/06/19 07:15:59 DEBUG FileSystem: ofs:// = class
> org.apache.hadoop.fs.ozone.RootedOzoneFileSystem from
> /opt/hadoop/kyuubi/20220618/apache-kyuubi-1.6.0-SNAPSHOT-bin-spark-3.2/externals/spark-3.2.1-bin-hadoop3.2/jars/ozone-filesystem-hadoop3-1.3.0-SNAPSHOT.jar
> 22/06/19 07:15:59 DEBUG FileSystem: Looking for FS supporting ofs
> 22/06/19 07:15:59 DEBUG FileSystem: looking for configuration option
> fs.ofs.impl
> 22/06/19 07:15:59 DEBUG FileSystem: Filesystem ofs defined in configuration
> option
> 22/06/19 07:15:59 DEBUG FileSystem: FS for ofs is class
> org.apache.hadoop.fs.ozone.RootedOzoneFileSystem
> 22/06/19 07:15:59 DEBUG Server: rpcKind=RPC_PROTOCOL_BUFFER,
> rpcRequestWrapperClass=class
> org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest,
> rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@66933239
> 22/06/19 07:15:59 DEBUG Client: getting client out of cache:
> Client-3da530302dcb460d8e283786621c481f
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: RetryProxy: OM om1:
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OMRequest
> cannot be cast to org.apache.hadoop.shaded.com.google.protobuf.Message
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 1, nodeId: om2
> 22/06/19 07:15:59 DEBUG RetryInvocationHandler: java.lang.ClassCastException:
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OMRequest
> cannot be cast to org.apache.hadoop.shaded.com.google.protobuf.Message, while
> invoking $Proxy26.submitRequest over
> nodeId=om1,nodeAddress=tg-local-bdworker-1.tg.mt.com:9862. Trying to failover
> immediately.
> java.lang.ClassCastException:
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OMRequest
> cannot be cast to org.apache.hadoop.shaded.com.google.protobuf.Message
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:233)
> at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:123)
> at com.sun.proxy.$Proxy26.submitRequest(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
> at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
> at com.sun.proxy.$Proxy26.submitRequest(Unknown Source)
> at
> org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:80)
> at
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:282)
> at
> org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1440)
> at
> org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:235)
> at
> org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:247)
> at
> org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:114)
> at
> org.apache.hadoop.fs.ozone.BasicRootedOzoneClientAdapterImpl.<init>(BasicRootedOzoneClientAdapterImpl.java:179)
> at
> org.apache.hadoop.fs.ozone.RootedOzoneClientAdapterImpl.<init>(RootedOzoneClientAdapterImpl.java:51)
> at
> org.apache.hadoop.fs.ozone.RootedOzoneFileSystem.createAdapter(RootedOzoneFileSystem.java:92)
> at
> org.apache.hadoop.fs.ozone.BasicRootedOzoneFileSystem.initialize(BasicRootedOzoneFileSystem.java:149)
> at
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> at
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:288)
> at
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider$.hadoopFSsToAccess(HadoopFSDelegationTokenProvider.scala:176)
> at
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.obtainDelegationTokens(HadoopFSDelegationTokenProvider.scala:50)
> at
> org.apache.spark.deploy.security.HadoopDelegationTokenManager.$anonfun$obtainDelegationTokens$2(HadoopDelegationTokenManager.scala:164)
> at
> scala.collection.TraversableLike.$anonfun$flatMap$1(TraversableLike.scala:293)
> at scala.collection.Iterator.foreach(Iterator.scala:943)
> at scala.collection.Iterator.foreach$(Iterator.scala:943)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
> at
> scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:214)
> at scala.collection.TraversableLike.flatMap(TraversableLike.scala:293)
> at
> scala.collection.TraversableLike.flatMap$(TraversableLike.scala:290)
> at scala.collection.AbstractTraversable.flatMap(Traversable.scala:108)
> at
> org.apache.spark.deploy.security.HadoopDelegationTokenManager.org$apache$spark$deploy$security$HadoopDelegationTokenManager$$obtainDelegationTokens(HadoopDelegationTokenManager.scala:162)
> at
> org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anon$2.run(HadoopDelegationTokenManager.scala:148)
> at
> org.apache.spark.deploy.security.HadoopDelegationTokenManager$$anon$2.run(HadoopDelegationTokenManager.scala:146)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> at
> org.apache.spark.deploy.security.HadoopDelegationTokenManager.obtainDelegationTokens(HadoopDelegationTokenManager.scala:146)
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.$anonfun$start$1(CoarseGrainedSchedulerBackend.scala:555)
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.$anonfun$start$1$adapted(CoarseGrainedSchedulerBackend.scala:549)
> at scala.Option.foreach(Option.scala:407)
> at
> org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.start(CoarseGrainedSchedulerBackend.scala:549)
> at
> org.apache.spark.scheduler.cluster.k8s.KubernetesClusterSchedulerBackend.start(KubernetesClusterSchedulerBackend.scala:95)
> at
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:220)
> at org.apache.spark.SparkContext.<init>(SparkContext.scala:581)
> at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2690)
> at
> org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$2(SparkSession.scala:949)
> at scala.Option.getOrElse(Option.scala:189)
> at
> org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:943)
> at
> org.apache.kyuubi.engine.spark.SparkSQLEngine$.createSpark(SparkSQLEngine.scala:186)
> at
> org.apache.kyuubi.engine.spark.SparkSQLEngine$.main(SparkSQLEngine.scala:277)
> at
> org.apache.kyuubi.engine.spark.SparkSQLEngine.main(SparkSQLEngine.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
> at
> org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:955)
> at
> org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:165)
> at
> org.apache.spark.deploy.SparkSubmit$$anon$1.run(SparkSubmit.scala:163)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
> at
> org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:163)
> at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
> at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
> at
> org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: Failing over OM from om1:0
> to om2:1
> 22/06/19 07:15:59 DEBUG Client: getting client out of cache:
> Client-3da530302dcb460d8e283786621c481f
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: RetryProxy: OM om2: null
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 2, nodeId: om3
> 22/06/19 07:15:59 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om2,nodeAddress=tg-local-bdworker-2.tg.mt.com:9862 after 1 failover
> attempts. Trying to failover immediately.
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: Failing over OM from om2:1
> to om3:2
> 22/06/19 07:15:59 DEBUG Client: getting client out of cache:
> Client-3da530302dcb460d8e283786621c481f
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: RetryProxy: OM om3: null
> 22/06/19 07:15:59 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 0, nodeId: om1
> 22/06/19 07:15:59 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om3,nodeAddress=tg-local-bdworker-3.tg.mt.com:9862 after 2 failover
> attempts. Trying to failover after sleeping for 2000ms.
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Failing over OM from om3:2
> to om1:0
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: RetryProxy: OM om1: null
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 1, nodeId: om2
> 22/06/19 07:16:01 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om1,nodeAddress=tg-local-bdworker-1.tg.mt.com:9862 after 3 failover
> attempts. Trying to failover immediately.
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Failing over OM from om1:0
> to om2:1
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: RetryProxy: OM om2: null
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 2, nodeId: om3
> 22/06/19 07:16:01 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om2,nodeAddress=tg-local-bdworker-2.tg.mt.com:9862 after 4 failover
> attempts. Trying to failover immediately.
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Failing over OM from om2:1
> to om3:2
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: RetryProxy: OM om3: null
> 22/06/19 07:16:01 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 0, nodeId: om1
> 22/06/19 07:16:01 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om3,nodeAddress=tg-local-bdworker-3.tg.mt.com:9862 after 5 failover
> attempts. Trying to failover after sleeping for 2000ms.
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Failing over OM from om3:2
> to om1:0
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: RetryProxy: OM om1: null
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 1, nodeId: om2
> 22/06/19 07:16:03 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om1,nodeAddress=tg-local-bdworker-1.tg.mt.com:9862 after 6 failover
> attempts. Trying to failover immediately.
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Failing over OM from om1:0
> to om2:1
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: RetryProxy: OM om2: null
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 2, nodeId: om3
> 22/06/19 07:16:03 INFO RetryInvocationHandler:
> java.lang.IllegalStateException, while invoking $Proxy26.submitRequest over
> nodeId=om2,nodeAddress=tg-local-bdworker-2.tg.mt.com:9862 after 7 failover
> attempts. Trying to failover immediately.
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Failing over OM from om2:1
> to om3:2
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: RetryProxy: OM om3: null
> 22/06/19 07:16:03 DEBUG OMFailoverProxyProvider: Incrementing OM proxy index
> to 0, nodeId: om1{code}
> If I build Spark with the following command instead, access succeeds:
> {code:java}
> mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=3.2.2 -Phive -Phive-thriftserver
> -Pkubernetes -DskipTests clean package{code}
>
> When compiled with just -Phadoop-2.7, ${SPARK_HOME}/jars contains the
> individual Hadoop jars:
> {code:java}
> $ ll hadoop-*
> -rw-r--r-- 1 hadoop hadoop 60199 Apr 21 11:15 hadoop-annotations-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 138914 Apr 21 11:15 hadoop-auth-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 44107 Apr 21 11:15 hadoop-client-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 4182881 Apr 21 11:15 hadoop-common-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 5139052 Apr 21 11:15 hadoop-hdfs-client-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 805828 Apr 21 11:15
> hadoop-mapreduce-client-common-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 1658798 Apr 21 11:15
> hadoop-mapreduce-client-core-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 85841 Apr 21 11:15
> hadoop-mapreduce-client-jobclient-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 3289927 Apr 21 11:15 hadoop-yarn-api-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 325110 Apr 21 11:15 hadoop-yarn-client-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 2919326 Apr 21 11:15 hadoop-yarn-common-3.2.2.jar
> -rw-r--r-- 1 hadoop hadoop 80795 Apr 21 11:15
> hadoop-yarn-server-web-proxy-3.2.2.jar{code}
> Does Ozone not currently support the shaded Hadoop jars?
> Thanks!