openinx commented on pull request #3810:
URL: https://github.com/apache/iceberg/pull/3810#issuecomment-1061342640


   Let me make this clearer:
   
   In the ORC class, we simply copy all key-value pairs from the property map into the 
Hadoop Configuration, and all subsequent configuration keys are parsed from that 
Configuration instance. A rough sketch of this flow is below.
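   
   Something like this (the class and the property key are just placeholders to 
illustrate the flow, not the actual Iceberg ORC code):
   
   ```java
   import java.util.Map;
   import org.apache.hadoop.conf.Configuration;
   
   class OrcStyleConfig {
     // Copy every property key/value into a Hadoop Configuration up front.
     static Configuration applyProperties(Configuration base, Map<String, String> properties) {
       Configuration conf = new Configuration(base);
       for (Map.Entry<String, String> entry : properties.entrySet()) {
         conf.set(entry.getKey(), entry.getValue());
       }
       return conf;
     }
   
     // All later options are parsed from the Configuration instance.
     static int readBatchSize(Configuration conf) {
       return conf.getInt("read.orc.vectorization.batch-size", 5000);
     }
   }
   ```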
   
   In the current Parquet class, we keep all the properties in an in-memory hash map, 
and the `dataContext` & `deleteContext` parse that hash map directly. Finally, we copy 
the key-value pairs from the HashMap into the Hadoop Configuration. A sketch of that 
flow follows.
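   
   Roughly (again, the class name and property key are only stand-ins for the real 
Iceberg Parquet types):
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   import org.apache.hadoop.conf.Configuration;
   
   class ParquetStyleConfig {
     // Properties stay in an in-memory map first.
     private final Map<String, String> properties = new HashMap<>();
   
     void set(String key, String value) {
       properties.put(key, value);
     }
   
     // The contexts parse options straight from the in-memory map.
     int batchSize() {
       return Integer.parseInt(
           properties.getOrDefault("read.parquet.vectorization.batch-size", "5000"));
     }
   
     // Finally, the key/values are copied into the Hadoop Configuration.
     Configuration toConfiguration(Configuration base) {
       Configuration conf = new Configuration(base);
       properties.forEach(conf::set);
       return conf;
     }
   }
   ```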
   
   In my mind, I don't see a meaningful difference between the two approaches. Both 
look good to me.

