lirui-apache commented on a change in pull request #11328: [FLINK-16455][hive] Introduce flink-sql-connector-hive modules to provide hive uber jars
URL: https://github.com/apache/flink/pull/11328#discussion_r389530695
 
 

 ##########
 File path: docs/dev/table/hive/index.md
 ##########
 @@ -92,6 +92,34 @@ to make the integration work in Table API program or SQL in SQL Client.
 Alternatively, you can put these dependencies in a dedicated folder, and add them to classpath with the `-C`
 or `-l` option for Table API program or SQL Client respectively.
 
+Apache Hive is built on Hadoop, so you need Hadoop dependency first, please refer to
+[Providing Hadoop classes]({{ site.baseurl }}/ops/deployment/hadoop.html#providing-hadoop-classes).
+
+There are two way to hive dependencies. First way, using bundled hive jar, providing multiple bundle hive jars that can cover all remote metastore versions. Second way, user defined dependencies, you can build your own dependencies if you need to.
 
 Review comment:
  ```suggestion
  There are two ways to add Hive dependencies. First is to use Flink's bundled Hive jars. You can choose a bundled Hive jar according to the version of the metastore you use. Second is to add each of the required jars separately. The second way can be useful if the Hive version you're using is not listed here.
  ```
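For context, the docs page quoted above mentions putting the dependencies in a dedicated folder and adding them to the classpath with the `-l` option for SQL Client. A minimal sketch of that workflow might look like the following (the jar filename and folder path are illustrative, not taken from this PR):

```shell
# Sketch: make a bundled Hive jar available to the Flink SQL Client.
# Assumes Hadoop classes are already provided as described in the
# "Providing Hadoop classes" docs linked above; the jar name below
# is a placeholder for whichever bundled Hive jar matches your metastore.
mkdir -p /tmp/hive-deps
cp flink-sql-connector-hive-*.jar /tmp/hive-deps/

# -l adds every jar in the given directory to the SQL Client classpath.
./bin/sql-client.sh embedded -l /tmp/hive-deps
```

For a Table API program submitted with `flink run`, the equivalent would be the `-C` (classpath) option mentioned in the same paragraph.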

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]

