bradleybonitatibus opened a new issue, #9890: URL: https://github.com/apache/hudi/issues/9890
**_Tips before filing an issue_**

- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.

**Describe the problem you faced**

We have a Hudi table with the following relevant `hoodie.properties`:

```
hoodie.table.precombine.field=updated_at
hoodie.datasource.write.drop.partition.columns=false
hoodie.table.partition.fields=inserted_at
hoodie.table.type=COPY_ON_WRITE
hoodie.archivelog.folder=archived
hoodie.table.cdc.enabled=false
hoodie.populate.meta.fields=true
hoodie.partition.metafile.use.base.format=false
hoodie.table.version=5
hoodie.timeline.layout.version=1
hoodie.table.base.file.format=PARQUET
hoodie.table.recordkey.fields=id
hoodie.table.keygenerator.class=org.apache.hudi.keygen.TimestampBasedKeyGenerator
```

It may be worth noting that our Deltastreamer is passed

```
hoodie.deltastreamer.keygen.timebased.output.dateformat='yyyy/MM/dd'
```

as configuration, and that the `inserted_at` column is a `LongType`. The Avro schema from an example parquet file is:

```json
{"name":"inserted_at","type":["null","long"],"default":null}
```

When reading this table with the following `pyspark` code, the `inserted_at` column comes back in the key generator's output format of `yyyy/MM/dd` instead of as a `LongType`, even with `inferSchema: false`, an explicit schema passed when loading the DataFrame, and an explicit cast.

```python3
base_path = f"s3a://{bucket_name}/{table_prefix}"
df = (
    spark_session.read.options(**hudi_opts)
    .format("hudi")
    .load(
        base_path,
        schema=T.StructType(
            fields=[
                T.StructField("inserted_at", T.LongType(), nullable=True),
                # other columns
            ]
        ),
    )
)
df.show()
```

**To Reproduce**

Steps to reproduce the behavior:

1. Write a Hudi table with a similar configuration.
2. Read from the table with pyspark.

**Expected behavior**

- I'd expect to be able to read the `LongType` value correctly.

**Environment Description**

* Hudi version : 0.13
* Spark version : 3.1
* Hive version :
* Hadoop version :
* Storage (HDFS/S3/GCS..) : S3
* Running on Docker? (yes/no) : No

**Additional context**

**Stacktrace**

N/A (the read succeeds; the issue is the returned column type/format).
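To help narrow down where the formatting happens, here is a minimal diagnostic sketch (not from the original report; it reuses the hypothetical `bucket_name`/`table_prefix` placeholders above and standard Spark 3 file-source options) that reads the table's parquet base files directly, bypassing the Hudi relation, to check the physical type of `inserted_at`:

```python3
# Diagnostic sketch: read the parquet base files under the table path directly,
# so the reported type reflects what is physically stored rather than what the
# Hudi DataSource relation produces.
raw_df = (
    spark_session.read
    .format("parquet")
    .option("pathGlobFilter", "*.parquet")   # only pick up parquet data files
    .option("recursiveFileLookup", "true")   # descend into the yyyy/MM/dd partition folders
    .load(base_path)
)

raw_df.select("inserted_at").printSchema()   # expected: long, per the Avro schema above
raw_df.select("inserted_at").show(5, truncate=False)
```

If this prints `long` while the `format("hudi")` read returns a `yyyy/MM/dd` string, the rewrite is presumably happening in the Hudi read path (likely tied to how `TimestampBasedKeyGenerator` partition values are handled) rather than in the stored data.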

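For completeness, a sketch of why an explicit cast does not help once the column comes back as a formatted string, along with a lossy parse-based workaround; the variable names follow the reader snippet above, and this is an assumption about the observed behaviour, not a confirmed fix:

```python3
from pyspark.sql import functions as F

hudi_df = (
    spark_session.read.options(**hudi_opts)
    .format("hudi")
    .load(base_path)
)

# If `inserted_at` arrives as a 'yyyy/MM/dd' string, casting it to long just
# yields nulls, which matches the "even when casting" symptom above.
hudi_df.select(F.col("inserted_at").cast("long")).show(5)

# Lossy workaround sketch: parse the partition-style string back to a timestamp
# and convert to epoch milliseconds. Only date precision survives, so the
# original long value is not fully recoverable this way.
recovered = hudi_df.withColumn(
    "inserted_at_epoch_ms",
    F.to_timestamp("inserted_at", "yyyy/MM/dd").cast("long") * 1000,
)
recovered.select("inserted_at", "inserted_at_epoch_ms").show(5, truncate=False)
```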