[
https://issues.apache.org/jira/browse/ARROW-5349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16841768#comment-16841768
]
Wes McKinney commented on ARROW-5349:
-------------------------------------
There is an API available to set the file path
https://github.com/apache/arrow/blob/master/cpp/src/parquet/metadata.h#L218
I would recommend adding a method on {{parquet::ParquetFileWriter}} that
sets this attribute to a particular value for all column chunk metadata it
creates. This method can then be exposed in Python.
> [Python/C++] Provide a way to specify the file path in parquet
> ColumnChunkMetaData
> ----------------------------------------------------------------------------------
>
> Key: ARROW-5349
> URL: https://issues.apache.org/jira/browse/ARROW-5349
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Reporter: Joris Van den Bossche
> Priority: Major
> Labels: parquet
> Fix For: 0.14.0
>
>
> After ARROW-5258 / https://github.com/apache/arrow/pull/4236 it is now
> possible to collect the file metadata while writing different files (how to
> write that collected metadata out was not yet addressed there; see the
> original issue ARROW-1983).
> However, currently, the {{file_path}} information in the ColumnChunkMetaData
> object is not set. That is, I think, expected / correct for the metadata as
> included within a single file; but to use the metadata in a combined dataset
> `_metadata` file, each column chunk needs its file path set.
> So if you want to use this metadata for a partitioned dataset, there needs to
> be a way to specify this file path.
> Ideas I am currently thinking of: either we could specify a file path to be
> used when writing, or we could expose the `set_file_path` method on the
> Python side so you can create an updated version of the metadata after
> collecting it.
> cc [~pearu] [~mdurant]
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)