Zouxxyy opened a new pull request, #9798:
URL: https://github.com/apache/hudi/pull/9798
### Change Logs
Steps to reproduce:
```sql
-- spark
set hoodie.schema.on.read.enable=true;
create table hudi_cow_test_tbl (
  id bigint,
  name string,
  ts int,
  dt string,
  hh string
) using hudi
tblproperties (
  type = 'cow',
  primaryKey = 'id',
  preCombineField = 'ts'
)
partitioned by (dt, hh);
insert into hudi_cow_test_tbl values (1, 'a1', 1001, '2021-12-09', '10');
ALTER TABLE hudi_cow_test_tbl ALTER COLUMN ts TYPE bigint;
insert into hudi_cow_test_tbl values (2, 'a2', 1001, '2021-12-09', '11');
-- hive
select id, dt from xinyu_test.hudi_cow_test_tbl;
```

The Hive query fails with:

```
java.io.IOException: org.apache.hudi.exception.HoodieException:
The size of hive.io.file.readcolumn.ids: 5,8,9 is not equal to projection columns: id
```
The core reason is ordering: the conf must be modified first (as done in
https://github.com/apache/hudi/pull/7355) and only then should `new
SchemaEvolutionContext(split, job).doEvolutionForParquetFormat();` be executed.
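The exception itself comes from a sanity check that compares the number of column ids Hive pushes down (`hive.io.file.readcolumn.ids`) against the number of projected column names; after the type change the stale conf still carries three ids while the projection is just `id`. A minimal self-contained sketch of that kind of check (the helper name is hypothetical, not Hudi's actual code):

```java
import java.io.IOException;

public class ReadColumnCheck {

    // Hypothetical helper mirroring the style of the failing check: the
    // comma-separated list of pushed-down column ids must have the same
    // size as the list of projected column names.
    static void validateProjection(String readColumnIds, String projectionColumns) throws IOException {
        int idCount = readColumnIds.isEmpty() ? 0 : readColumnIds.split(",").length;
        int nameCount = projectionColumns.isEmpty() ? 0 : projectionColumns.split(",").length;
        if (idCount != nameCount) {
            throw new IOException("The size of hive.io.file.readcolumn.ids: " + readColumnIds
                + " is not equal to projection columns: " + projectionColumns);
        }
    }

    public static void main(String[] args) throws IOException {
        // Consistent conf and projection: passes.
        validateProjection("5,8", "id,dt");
        try {
            // The mismatch from the report: three ids, one projected column.
            validateProjection("5,8,9", "id");
        } catch (IOException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

Reordering so the conf is rewritten before `doEvolutionForParquetFormat()` runs keeps the two sides of this comparison in sync.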
### Impact
Fixes the issue described above.
### Risk level (write none, low medium or high below)
low
### Documentation Update
none
### Contributor's checklist
- [ ] Read through [contributor's
guide](https://hudi.apache.org/contribute/how-to-contribute)
- [ ] Change Logs and Impact were stated clearly
- [ ] Adequate tests were added if applicable
- [ ] CI passed