[ https://issues.apache.org/jira/browse/ARROW-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17074114#comment-17074114 ]
Wes McKinney commented on ARROW-8244:
-------------------------------------

As long as there's a well-documented way to generate the _metadata file containing all the row group metadata and file paths in a single structure, and then construct a dataset from the _metadata file (avoiding having to parse the metadata from all the constituent files -- which is time consuming), that sounds good to me.

> [Python][Parquet] Add `write_to_dataset` option to populate the "file_path" metadata fields
> --------------------------------------------------------------------------------------------
>
>                 Key: ARROW-8244
>                 URL: https://issues.apache.org/jira/browse/ARROW-8244
>             Project: Apache Arrow
>          Issue Type: Wish
>          Components: Python
>            Reporter: Rick Zamora
>            Assignee: Joris Van den Bossche
>            Priority: Minor
>              Labels: parquet, pull-request-available
>             Fix For: 0.17.0
>
>          Time Spent: 1h
>  Remaining Estimate: 0h
>
> Prior to [dask#6023|https://github.com/dask/dask/pull/6023], Dask had been using the `write_to_dataset` API to write partitioned parquet datasets. That PR switches to a (hopefully temporary) custom solution, because the API makes it difficult to populate the "file_path" column-chunk metadata fields that are returned through the optional `metadata_collector` kwarg. Dask needs to set these fields correctly in order to generate a proper global `"_metadata"` file.
> Possible solutions to this problem:
> # Optionally populate the file-path fields within `write_to_dataset`
> # Always populate the file-path fields within `write_to_dataset`
> # Return the file paths for the data written within `write_to_dataset` (up to the user to manually populate the file-path fields)
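
For illustration, a minimal sketch of the workflow described in the comment above, assuming a pyarrow version recent enough that `write_to_dataset` populates the collected "file_path" fields (the option this issue asks for) and the datasets API exposes `pyarrow.dataset.parquet_dataset`; the table contents, column names, and `my_dataset` path are hypothetical:

{code:python}
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as ds

# Hypothetical example data; "year" is used as the partition column.
table = pa.table({"year": [2019, 2019, 2020], "value": [1.0, 2.0, 3.0]})

root_path = "my_dataset"   # hypothetical output directory
collector = []             # receives one FileMetaData object per written file

# Write the partitioned dataset. With the file-path option, the collected
# metadata has its "file_path" fields set relative to root_path.
pq.write_to_dataset(
    table,
    root_path=root_path,
    partition_cols=["year"],
    metadata_collector=collector,
)

# The data files themselves do not contain the partition column, so the
# _metadata file is written with that column dropped from the schema.
file_schema = table.schema.remove(table.schema.get_field_index("year"))

# Combine all collected row-group metadata into a single _metadata file.
pq.write_metadata(
    file_schema,
    f"{root_path}/_metadata",
    metadata_collector=collector,
)

# Construct a dataset directly from _metadata, so no constituent file has
# to be opened just to read its footer.
dataset = ds.parquet_dataset(f"{root_path}/_metadata", partitioning="hive")
print(dataset.to_table().num_rows)
{code}

A reader such as Dask can then plan the whole dataset from that single `_metadata` footer instead of touching every constituent file.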