Hi Parquet devs,

Our team is migrating userid from int to bigint across our whole Hadoop system. Refreshing the non-partitioned tables is quick, but many of our partitioned tables have huge partition files, and we are looking for a way to change the column type without rewriting every partition one by one. That is why I am writing to you.
I have read https://github.com/apache/parquet-format to understand the Parquet format, but I am still confused about the metadata, so I have two questions:

1. If I want to change a column's type, do I need to change it in both the file metadata and the column (chunk) metadata, or am I missing anything?
2. If I change a column's type from int32 to int64 directly in the file metadata and the column (chunk) metadata, can the compressed data still be read correctly? If not, what is the problem?

Thank you so much for your time; we would appreciate any reply.

Best Regards,
Ronnie
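P.S. To illustrate our concern behind question 2 (this is only our own guess at the problem, using Python's struct module rather than any Parquet code): int32 values occupy 4 bytes each and int64 values 8, so a reader that trusts an edited INT64 type annotation would slice the original 4-byte-per-value page data incorrectly.

```python
import struct

# PLAIN-encode three values as little-endian int32 (4 bytes each),
# roughly as a Parquet data page would for an INT32 column.
values = [1, 2, 3]
int32_bytes = struct.pack("<3i", *values)
assert len(int32_bytes) == 12  # 3 values * 4 bytes

# A reader told the column is INT64 expects 8 bytes per value, so it
# would read the first two int32 values as a single int64:
misread = struct.unpack_from("<q", int32_bytes, 0)[0]
# bytes 01 00 00 00 02 00 00 00 -> 1 + 2 * 2**32 = 8589934593
print(misread)
```

If that reasoning is right, simply patching the type in the metadata would corrupt reads, but please correct us if the page data is interpreted differently.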
