york-yu-ctw opened a new issue #4283: URL: https://github.com/apache/hudi/issues/4283
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)? Yes
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

After upgrading to Hudi 0.10.0, Redshift Spectrum is no longer able to read any data from the table.

**To Reproduce**

Steps to reproduce the behavior:

1. Write data to S3 with Hudi 0.10.0.
2. Create a Redshift Spectrum external table over that location.
3. Query the table.

The Spark write configuration:

```
df.write
  .format("hudi")
  .option("hoodie.datasource.write.table.type", "MERGE_ON_READ")
  .option("hoodie.datasource.write.partitionpath.field", "dt")
  .option("hoodie.datasource.write.recordkey.field", "uuid")
  .option("hoodie.datasource.write.precombine.field", "time")
  .option("hoodie.datasource.write.hive_style_partitioning", "true")
  .option("hoodie.datasource.write.operation", "insert")
  .option("hoodie.compaction.strategy", "org.apache.hudi.table.action.compact.strategy.UnBoundedCompactionStrategy")
  .option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexAvroKeyGenerator")
  .option("hoodie.table.name", "table1")
  .option("hoodie.insert.shuffle.parallelism", "20")
  .mode("overwrite")
  .save("s3://xxxxxx/data")
```

The Spectrum table definition:

```
CREATE EXTERNAL TABLE spectrum.biz_game_v3_hudi (
  _hoodie_commit_time VARCHAR(64),
  _hoodie_record_key VARCHAR(512),
  _hoodie_partition_path VARCHAR(128),
  uuid VARCHAR(45),
  time VARCHAR(45)
)
PARTITIONED BY (dt VARCHAR, appid VARCHAR, region VARCHAR)
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hudi.hadoop.HoodieParquetInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat'
LOCATION 's3://xxxxx/data';
```

I noticed that the instant timestamp format changed from `yyyyMMddHHmmss` to `yyyyMMddHHmmssSSS`. When I switched the input format to `org.apache.hudi.hadoop.hive.HoodieCombineHiveInputFormat`, Redshift reported the error `hudi::ParsedFilename::IsValidCommitTimestamp(std::string(ctx.hudi_commit_timestamp))`. I still have no idea why nothing is returned when using `org.apache.hudi.hadoop.HoodieParquetInputFormat`.

**Expected behavior**

Upgrading Hudi should not break readers that could read data written by the previous version.

**Environment Description**

* Hudi version : 0.10.0
* Spark version : 3.1.1
* Hive version :
* Hadoop version : 3.2.1
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : no

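To confirm which instant format a given table actually contains, a small diagnostic can list the completed instants under the table's `.hoodie` folder and print their lengths. Again a sketch, not part of Hudi: it uses only the Hadoop `FileSystem` API and assumes an S3 filesystem implementation for the `s3://` scheme is on the classpath; the base path is the placeholder from the issue.

```
import java.net.URI
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ListInstantTimes {
  def main(args: Array[String]): Unit = {
    val basePath = "s3://xxxxxx/data"
    val fs = FileSystem.get(URI.create(basePath), new Configuration())
    fs.listStatus(new Path(basePath, ".hoodie"))
      .map(_.getPath.getName) // e.g. 20211210123045123.deltacommit
      .filter(n => n.endsWith(".commit") || n.endsWith(".deltacommit"))
      .foreach { n =>
        val instant = n.takeWhile(_ != '.')
        println(s"$instant (length=${instant.length})")
      }
  }
}
```

A mix of 14- and 17-character instants in the output would indicate a timeline written by both the old and new Hudi versions, which is exactly the situation a length-based validator on the reader side cannot handle.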