codope opened a new pull request #3917:
URL: https://github.com/apache/hudi/pull/3917
## What is the purpose of the pull request
`spark.sql.parquet.writeLegacyFormat` was hardcoded to `false` in
`HoodieRowParquetWriteSupport`. In some cases, users need to set it to `true`.
From a user on Slack:
```
Reason to use this config:
The current bulk insert path uses the Spark DataFrame writer and does not
perform Avro conversion, so the decimal columns in my DataFrame are written
as INT32 in Parquet. Upsert, which does use Avro conversion, generates
fixed-length byte arrays for decimal types, and this fails with a datatype
mismatch.
```
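The mismatch the user describes comes from how Spark picks the Parquet physical type for decimals. A minimal sketch of that selection logic (the actual decision lives in Spark's Parquet schema converter; this is a simplified model, not Spark code):

```python
def decimal_physical_type(precision: int, write_legacy_format: bool) -> str:
    """Return the Parquet physical type Spark uses for DECIMAL(precision, _).

    With spark.sql.parquet.writeLegacyFormat=true, Spark always emits the
    Hive/Spark-1.4-compatible FIXED_LEN_BYTE_ARRAY encoding. With it set to
    false, small-precision decimals are packed into INT32/INT64 instead.
    """
    if write_legacy_format:
        return "FIXED_LEN_BYTE_ARRAY"
    if precision <= 9:          # fits in a 32-bit integer
        return "INT32"
    if precision <= 18:         # fits in a 64-bit integer
        return "INT64"
    return "FIXED_LEN_BYTE_ARRAY"
```

So a `DECIMAL(8, 2)` column lands as `INT32` on the bulk insert (row writer) path but as `FIXED_LEN_BYTE_ARRAY` on the Avro-based upsert path, which is exactly the schema conflict reported above.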
## Brief change log
- Make `spark.sql.parquet.writeLegacyFormat` configurable.
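Once configurable, a user could opt into the legacy encoding on the bulk insert path. A hypothetical PySpark sketch (the option key `hoodie.parquet.writelegacyformat.enabled` is an assumption for illustration; the PR does not fix the final config name):

```python
# Config fragment only; assumes an existing DataFrame `df` and a Hudi
# table path `base_path`. The writelegacyformat key is hypothetical.
hudi_options = {
    "hoodie.datasource.write.operation": "bulk_insert",
    # New knob from this PR (assumed name) forwarded to
    # spark.sql.parquet.writeLegacyFormat in HoodieRowParquetWriteSupport:
    "hoodie.parquet.writelegacyformat.enabled": "true",
}
# df.write.format("hudi").options(**hudi_options).save(base_path)
```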
## Verify this pull request
*(Please pick either of the following options)*
This pull request is a trivial rework / code cleanup without any test
coverage.
*(or)*
This pull request is already covered by existing tests, such as *(please
describe tests)*.
*(or)*
This change added tests and can be verified as follows:
*(example:)*
- *Added integration tests for end-to-end.*
- *Added HoodieClientWriteTest to verify the change.*
- *Manually verified the change by running a job locally.*
## Committer checklist
- [ ] Has a corresponding JIRA in PR title & commit
- [ ] Commit message is descriptive of the change
- [ ] CI is green
- [ ] Necessary doc changes done or have another open PR
- [ ] For large changes, please consider breaking it into sub-tasks under
an umbrella JIRA.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]