[ https://issues.apache.org/jira/browse/FLINK-25253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17457149#comment-17457149 ]
Chesnay Schepler commented on FLINK-25253:
------------------------------------------
The Hive connector depends on hadoop-mapreduce-client-core, which is not part
of your Hadoop classpath. You will need to provide it manually, either by
extending your Hadoop classpath, adding the jar to Flink's lib directory, or
bundling it in the user jar.
> A ClassNotFoundException for a missing Hadoop class occurs when submitting to
> YARN
> ----------------------------------------------------------------------------------
>
> Key: FLINK-25253
> URL: https://issues.apache.org/jira/browse/FLINK-25253
> Project: Flink
> Issue Type: Bug
> Components: Deployment / YARN
> Affects Versions: 1.14.0
> Environment: Environment version:
> Hadoop 3.1.1
> Hive 3.1.1
> Flink 1.14.0
> Kafka 2.6.1
>
> Reporter: ghost synth
> Priority: Blocker
> Attachments: FlinkPlaySubmit.scala, flink lib.png,
> original-TropicalaLink-1.0-SNAPSHOT.jar, pom.xml, submit_log.log
>
> Original Estimate: 96h
> Remaining Estimate: 96h
>
> I use the Hive table connector to write to Hive from Kafka. The job submits
> to YARN successfully, but during execution it always fails with
> *Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.mapred.JobConf*
> Before submitting, I executed "export HADOOP_CLASSPATH=`hadoop classpath`" to
> import the Hadoop dependencies.
> I found in the JM log that the classpath already contains the Hadoop
> dependencies, but the exception still occurs.
> The original jar that I submitted contains only code and no dependencies; the
> program loads dependencies from the Hadoop classpath and the lib directory
> under the Flink installation.
> The attachments contain the Flink lib listing, code, jar and JM log.
> Thanks