vustef commented on issue #7299:
URL: https://github.com/apache/arrow-rs/issues/7299#issuecomment-3437582636

   > Hmm, that's a good point. Maybe we could have it be FieldRef instead of 
Field? Though, that might limit other use cases, so might not be worth it. 
Validation sounds better.
   
   Yeah, for Iceberg, for example, I think it'd be good to be able to write:
   ```rust
    // optional, just an example here
    .with_metadata(std::collections::HashMap::from([(
        PARQUET_FIELD_ID_META_KEY.to_string(),
        "2147483645".to_string(),
    )]));
   ```
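    For context, a self-contained sketch of what that metadata map looks like. The real constant lives in the `parquet` crate (`parquet::arrow::PARQUET_FIELD_ID_META_KEY`); it is redefined here only so the snippet runs without that dependency, and the field ID value is just an example:

    ```rust
    use std::collections::HashMap;

    // Stand-in for parquet::arrow::PARQUET_FIELD_ID_META_KEY, redefined here
    // so this sketch compiles without the parquet crate.
    const PARQUET_FIELD_ID_META_KEY: &str = "PARQUET:field_id";

    fn main() {
        // Metadata map as it would be passed to Field::with_metadata.
        let metadata: HashMap<String, String> = HashMap::from([(
            PARQUET_FIELD_ID_META_KEY.to_string(),
            "2147483645".to_string(),
        )]);
        assert_eq!(metadata[PARQUET_FIELD_ID_META_KEY], "2147483645");
    }
    ```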
   
   > I think we should allow sending it in the regular schema. From the point 
of view of the user of the ParquetRecordBatchReader it should look like the 
Parquet file has a physical row number column
   
   I see, that's a fair point. Perhaps then, to address all concerns: what is confusing is having two different methods. Maybe we shouldn't have `with_metadata_columns` at all and should just require using `with_schema`? We could also have a utility function like `append_to_schema` that appends columns (whether metadata columns or not) to the end of the current schema.
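
   A simplified model of what that utility's semantics could be. `append_to_schema` is only a proposed name from this discussion; real arrow-rs code would operate on `Schema`/`Field`, but here a schema is modeled as a list of `(name, data_type)` pairs so the sketch runs with only the standard library:

   ```rust
   // Stand-in for arrow's Field: (column name, data type name).
   type Field = (String, String);

   /// Hypothetical utility: return a new schema with `extra` columns
   /// (metadata columns or otherwise) appended to the end.
   fn append_to_schema(schema: &[Field], extra: &[Field]) -> Vec<Field> {
       let mut out = schema.to_vec();
       out.extend_from_slice(extra);
       out
   }

   fn main() {
       let base = vec![("id".to_string(), "Int64".to_string())];
       let extra = vec![("row_number".to_string(), "Int64".to_string())];
       let merged = append_to_schema(&base, &extra);
       // The appended column lands at the end, after all existing columns.
       assert_eq!(merged.len(), 2);
       assert_eq!(merged[1].0, "row_number");
   }
   ```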


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
