sunmingqiaa opened a new issue, #9766: URL: https://github.com/apache/hudi/issues/9766
**Describe the problem you faced**

Following the test case at https://hudi.apache.org/docs/flink-quick-start-guide#insert-data, I added Hive sync options to the Hudi table. When I query through the Flink SQL client, the TIMESTAMP(3) column shows values in the format `1970-01-01 00:00:01`, but when I query the same table in Hive, the column comes back as a BIGINT numeric value. How can I keep this column's type consistent when it is synced to Hive?

**Environment Description**

* Hudi version : 1.13.0
* Spark version :
* Hive version : 2.1.1-cdh6.3.2
* Hadoop version : 3.0.0-cdh6.3.2
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Additional context**

Table definition used in the Flink SQL client:

```sql
CREATE TABLE t1 (
  uuid VARCHAR(20),
  name VARCHAR(10),
  age INT,
  ts TIMESTAMP(3),
  `partition` VARCHAR(20)
)
PARTITIONED BY (`partition`)
WITH (
  'connector' = 'hudi',
  'path' = '/tmp/hudi/t1',
  'table.type' = 'COPY_ON_WRITE',   -- if MERGE_ON_READ, Hive queries return no output until a Parquet file is generated
  'hive_sync.enable' = 'true',      -- required: enables Hive synchronization
  'hive_sync.mode' = 'hms',         -- required: sync via Hive Metastore; default is jdbc
  'hive_sync.metastore.uris' = 'thrift://syq-121:9083',
  'hive_sync.jdbc_url' = 'jdbc:hive2://syq-121:10000',
  'hive_sync.table' = 'test_hudi',
  'hive_sync.support_timestamp' = 'true',
  'hive_sync.db' = 'default'
);
```
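As a stopgap while the column is still exposed as BIGINT in Hive, the epoch value can be converted back to a readable timestamp in the Hive query itself. This is a minimal sketch, not a fix for the sync behavior: it assumes the stored value is epoch *milliseconds* (matching the table's TIMESTAMP(3) declaration); if the data is actually stored as epoch microseconds, divide by 1000000 instead. The table and column names follow the definition above.

```sql
-- Hypothetical Hive-side workaround: render the BIGINT epoch column as a timestamp.
-- Assumes `ts` holds epoch milliseconds; use `ts DIV 1000000` for microseconds.
SELECT
  uuid,
  name,
  CAST(from_unixtime(ts DIV 1000) AS timestamp) AS ts_readable
FROM default.test_hudi;
```

Note that `hive_sync.support_timestamp` (already set above) is the option intended to register the Parquet timestamp logical type as Hive's TIMESTAMP instead of BIGINT, so the question is why it does not take effect for this TIMESTAMP(3) column in this environment.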
