[ https://issues.apache.org/jira/browse/ARROW-8244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17068794#comment-17068794 ]

Joris Van den Bossche commented on ARROW-8244:
----------------------------------------------

Thanks for opening the issue, [~rjzamora]!

Agreed, this is a problem, and I think we should at least also return the path 
(so it can be fixed afterwards), or otherwise set it ourselves (optionally).

Regarding those different options: starting to also return the path together 
with the metadata is not really backwards compatible, so we would need to add 
an additional keyword like `path_collector` in addition to `metadata_collector`.
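
To make that concrete, below is a rough sketch (untested, just to illustrate) 
of what the "fix it afterwards" route looks like with only today's API; the 
fragile directory listing at the end is exactly what a returned path or a 
`path_collector` keyword would replace:

```python
import pathlib

import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"year": [2019, 2019, 2020], "value": [1.0, 2.0, 3.0]})

root = "dataset_root"
metadata = []
pq.write_to_dataset(
    table, root, partition_cols=["year"], metadata_collector=metadata
)

# Today the collected FileMetaData objects have empty file_path fields.
# Reconstruct the relative paths by listing the written files; this only
# works because nothing else wrote to the directory, and the ordering is
# not guaranteed to line up with the collector -- hence this issue.
paths = sorted(
    str(p.relative_to(root)) for p in pathlib.Path(root).rglob("*.parquet")
)
for md, path in zip(metadata, paths):
    md.set_file_path(path)  # existing FileMetaData.set_file_path
```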

Whether we can simply always populate the file path might depend on whether 
there are other use cases for collecting this metadata (although I assume dask 
is the main user of this keyword).
A GitHub search turned up dask, cudf and spatialpandas as users of the 
`metadata_collector` keyword. I assume `cudf` needs the same fix as dask; I 
haven't checked yet how it is used in spatialpandas.

I suppose optionally populating it is the safest option; I am only doubtful 
that having it optional behind a new keyword is actually useful (whether there 
are any use cases for not wanting to populate it).

> [Python][Parquet] Add `write_to_dataset` option to populate the "file_path" 
> metadata fields
> -------------------------------------------------------------------------------------------
>
>                 Key: ARROW-8244
>                 URL: https://issues.apache.org/jira/browse/ARROW-8244
>             Project: Apache Arrow
>          Issue Type: Wish
>          Components: Python
>            Reporter: Rick Zamora
>            Priority: Minor
>              Labels: parquet
>             Fix For: 0.17.0
>
>
> Prior to [dask#6023|https://github.com/dask/dask/pull/6023], Dask had been 
> using the `write_to_dataset` API to write partitioned parquet datasets. That 
> PR switches to a (hopefully temporary) custom solution, because the API makes 
> it difficult to populate the "file_path" column-chunk metadata fields that 
> are returned within the optional `metadata_collector` kwarg. Dask needs to 
> set these fields correctly in order to generate a proper global `"_metadata"` 
> file.
> Possible solutions to this problem:
>  # Optionally populate the file-path fields within `write_to_dataset`
>  # Always populate the file-path fields within `write_to_dataset`
>  # Return the file paths for the data written within `write_to_dataset` (up 
> to the user to manually populate the file-path fields)
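
For reference, the `_metadata` assembly that dask performs downstream looks 
roughly like the sketch below (a hypothetical helper, not dask's actual code); 
it assumes the collected `FileMetaData` objects already have their file paths 
set, which is exactly what the options above would enable:

```python
def write_global_metadata(metadata, where="dataset_root/_metadata"):
    """Merge FileMetaData collected via metadata_collector into one footer.

    Each entry is assumed to have had set_file_path() called on it already;
    without the file_path fields, readers of the resulting _metadata file
    cannot locate the row groups on disk.
    """
    merged = metadata[0]
    for md in metadata[1:]:
        merged.append_row_groups(md)   # concatenate row-group metadata
    merged.write_metadata_file(where)  # write the combined footer to disk
```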



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
