putaozhi123 opened a new issue #2869:
URL: https://github.com/apache/hudi/issues/2869
After the Hudi sync, the Hive table is created as follows:

```sql
CREATE EXTERNAL TABLE IF NOT EXISTS `tmp`.`hud_test` (
  `_hoodie_commit_time` string,
  `_hoodie_commit_seqno` string,
  `_hoodie_record_key` string,
  `_hoodie_partition_path` string,
  `_hoodie_file_name` string,
  `name` string,
  `age` int,
  `address` string,
  `score` DECIMAL(10, 4),
  `ts` TIMESTAMP)
PARTITIONED BY (`default` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION '/tmp/hudi_cow/xx'
```
When I run the Spark job that writes the Hudi table and syncs it into Hive, querying the Hudi table from Hive fails with the following error:
```
Failed with exception java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException:
java.lang.ClassCastException: org.apache.hadoop.io.LongWritable cannot be cast
to org.apache.hadoop.hive.serde2.io.TimestampWritable
```
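For context, the exception indicates that Hive reads the `ts` column back from parquet as a raw long, while the table schema declares it as `TIMESTAMP`. Assuming the long is an epoch value in microseconds (my assumption about how the timestamp ends up stored, not something confirmed in the trace), the client-side conversion that this mismatch forces would look like:

```python
from datetime import datetime, timezone

def micros_to_timestamp(epoch_micros: int) -> datetime:
    # Convert an epoch-microseconds long (assumed storage format of the
    # `ts` column in the parquet file) into a timezone-aware datetime.
    return datetime.fromtimestamp(epoch_micros / 1_000_000, tz=timezone.utc)

# Example: 1618876800000000 micros is 2021-04-20 00:00:00 UTC.
print(micros_to_timestamp(1618876800000000).isoformat())
# → 2021-04-20T00:00:00+00:00
```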
**Environment Description**

* Hudi version : 0.8.0
* Spark version : 2.4.4
* Hive version : 2.3.8
* Hadoop version : 2.10.1
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no
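One knob that may be relevant here (an assumption on my part, not verified against the 0.8.0 docs): Hudi's Hive sync exposes a `hoodie.datasource.hive_sync.support_timestamp` option intended to make timestamp columns readable from Hive. Passed as Spark datasource write options, a sketch would be:

```
hoodie.datasource.hive_sync.enable=true
hoodie.datasource.hive_sync.support_timestamp=true
```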
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]