[ https://issues.apache.org/jira/browse/FLINK-20237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17237983#comment-17237983 ]

Rui Li commented on FLINK-20237:
--------------------------------

Hi [~dwysakowicz], I'd prefer not to add Hive jars to HADOOP_CLASSPATH, because 
HADOOP_CLASSPATH is an env variable recognized by Hadoop itself, e.g. in the 
[RunJar 
util|https://github.com/apache/hadoop/blob/rel/release-2.7.5/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/RunJar.java#L68].
 So we'd better not tamper with it.

The recommended way to add the Hive dependency is to use a 
"flink-sql-connector-hive" uber jar. Users just need to add this one jar, as 
well as set HADOOP_CLASSPATH, and they are ready to integrate with Hive.
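As a sketch of the setup described above (the connector artifact name below is an example; the exact flink-sql-connector-hive jar and versions depend on your Hive and Flink versions, and FLINK_HOME is assumed to point at the Flink distribution):

```shell
# Make Hadoop dependencies visible to Flink via the env variable
# Hadoop itself recognizes (do not add Hive jars here).
export HADOOP_CLASSPATH=`hadoop classpath`

# Drop the single Hive uber jar into Flink's lib directory instead of
# adding hive-exec separately. Example artifact name; pick the one
# matching your Hive and Flink versions.
cp flink-sql-connector-hive-2.3.6_2.11-1.12.0.jar "$FLINK_HOME/lib/"

# Hive catalog/connector classes are now on the classpath, e.g. for
# the SQL client:
"$FLINK_HOME/bin/sql-client.sh" embedded
```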

> Do not recommend putting hive-exec in flink/lib
> -----------------------------------------------
>
>                 Key: FLINK-20237
>                 URL: https://issues.apache.org/jira/browse/FLINK-20237
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Hive, Documentation
>    Affects Versions: 1.12.0
>            Reporter: Dawid Wysakowicz
>            Priority: Critical
>             Fix For: 1.12.0
>
>
> The Hive setup page links to Flink's Hadoop setup page which recommends 
> passing hadoop dependencies via:
> {code}
> export HADOOP_CLASSPATH=`hadoop classpath`
> {code}
> This should in 99% of cases put the hive-exec dependency on the classpath 
> already. I'd recommend removing the recommendation to put {{hive-exec}} into 
> {{/lib}} separately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
