[ https://issues.apache.org/jira/browse/FLINK-20235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jingsong Lee closed FLINK-20235.
--------------------------------
Resolution: Fixed
> Missing Hive dependencies
> -------------------------
>
> Key: FLINK-20235
> URL: https://issues.apache.org/jira/browse/FLINK-20235
> Project: Flink
> Issue Type: Bug
> Components: Connectors / Hive
> Affects Versions: 1.12.0
> Environment: hive 2.3.4
> hadoop 2.7.4
> Reporter: Dawid Wysakowicz
> Assignee: Jingsong Lee
> Priority: Blocker
> Labels: pull-request-available
> Fix For: 1.12.0
>
>
> I tried following the setup here:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#dependencies
> I put the flink-sql-connector-hive-2.3.6 jar in the {{/lib}} directory and tried
> running queries (as described in
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/hive_streaming.html)
> via the {{sql-client}}:
> {code}
> SET table.sql-dialect=hive;
> CREATE TABLE hive_table (
>   user_id STRING,
>   order_amount DOUBLE
> ) PARTITIONED BY (dt STRING, hr STRING) STORED AS parquet TBLPROPERTIES (
>   'partition.time-extractor.timestamp-pattern'='$dt $hr:00:00',
>   'sink.partition-commit.trigger'='partition-time',
>   'sink.partition-commit.delay'='1 s',
>   'sink.partition-commit.policy.kind'='metastore,success-file'
> );
> SET table.sql-dialect=default;
> SELECT * FROM hive_table;
> {code}
> It fails with:
> {code}
> Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.flink.hive.shaded.parquet.format.converter.ParquetMetadataConverter
>     at org.apache.flink.hive.shaded.formats.parquet.ParquetVectorizedInputFormat.createReader(ParquetVectorizedInputFormat.java:112)
>     at org.apache.flink.hive.shaded.formats.parquet.ParquetVectorizedInputFormat.createReader(ParquetVectorizedInputFormat.java:73)
>     at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter.createReader(HiveBulkFormatAdapter.java:99)
>     at org.apache.flink.connectors.hive.read.HiveBulkFormatAdapter.createReader(HiveBulkFormatAdapter.java:62)
>     at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.checkSplitOrStartNext(FileSourceSplitReader.java:110)
>     at org.apache.flink.connector.file.src.impl.FileSourceSplitReader.fetch(FileSourceSplitReader.java:68)
>     at org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
>     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:136)
>     at org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:100)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     ... 1 more
> {code}
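> The same read path can also be hit from the Java Table API. Below is a minimal
> sketch (not the original reproduction): the catalog name, default database and
> hive-conf directory are assumed placeholders, and it reads the {{hive_table}}
> created above.
> {code}
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.TableEnvironment;
> import org.apache.flink.table.catalog.hive.HiveCatalog;
>
> public class HiveParquetReadRepro {
>     public static void main(String[] args) {
>         // Blink planner, streaming mode (as used by the SQL client in 1.12).
>         EnvironmentSettings settings = EnvironmentSettings.newInstance()
>                 .useBlinkPlanner()
>                 .inStreamingMode()
>                 .build();
>         TableEnvironment tEnv = TableEnvironment.create(settings);
>
>         // Register the Hive catalog; "/opt/hive-conf" is an assumed path to the
>         // directory containing hive-site.xml.
>         HiveCatalog hiveCatalog = new HiveCatalog("myhive", "default", "/opt/hive-conf");
>         tEnv.registerCatalog("myhive", hiveCatalog);
>         tEnv.useCatalog("myhive");
>
>         // Reading the Parquet-backed table goes through the shaded
>         // ParquetVectorizedInputFormat shown in the stack trace above.
>         tEnv.executeSql("SELECT * FROM hive_table").print();
>     }
> }
> {code}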
--
This message was sent by Atlassian Jira
(v8.3.4#803005)