[
https://issues.apache.org/jira/browse/ARROW-1983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16849106#comment-16849106
]
Wes McKinney commented on ARROW-1983:
-------------------------------------
Probably the most flexible thing for writing would be a function that appends a
single {{FileMetaData}} to an OutputStream:
{code:java}
Status AppendFileMetaData(const FileMetaData& metadata,
                          arrow::io::OutputStream* out);
{code}
Correspondingly, please also write a function that parses a file containing
multiple metadata entries, like:
{code:java}
Status ParseMetaDataFile(arrow::io::InputStream* input,
                         std::vector<std::shared_ptr<FileMetaData>>* out);
{code}
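Together, these could be used roughly as follows (a sketch only; {{WriteMetaDataSidecar}}
and {{ReadMetaDataSidecar}} are placeholder names, and the caller is assumed to have
already collected the per-file {{FileMetaData}} and opened the streams):
{code:java}
// Sketch: build a _metadata sidecar by appending each file's metadata in turn.
Status WriteMetaDataSidecar(const std::vector<std::shared_ptr<FileMetaData>>& collected,
                            arrow::io::OutputStream* sink) {
  for (const auto& md : collected) {
    Status st = AppendFileMetaData(*md, sink);
    if (!st.ok()) return st;
  }
  return Status::OK();
}

// Sketch: read every metadata entry back out of the sidecar.
Status ReadMetaDataSidecar(arrow::io::InputStream* source,
                           std::vector<std::shared_ptr<FileMetaData>>* out) {
  return ParseMetaDataFile(source, out);
}
{code}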
Lastly, AFAIK we are not able to instantiate a {{ParquetFileReader}} from
previously-read {{FileMetaData}} -- that could probably be done in a separate JIRA.
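For reference, that follow-up could look roughly like the overload below (hypothetical,
not an existing API):
{code:java}
class ParquetFileReader {
 public:
  // Hypothetical overload: open a reader without re-parsing the footer by
  // reusing FileMetaData previously read from a _metadata file.
  static std::unique_ptr<ParquetFileReader> Open(
      std::shared_ptr<::arrow::io::RandomAccessFile> source,
      const ReaderProperties& properties = default_reader_properties(),
      std::shared_ptr<FileMetaData> metadata = nullptr);
};
{code}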
> [Python] Add ability to write parquet `_metadata` file
> ------------------------------------------------------
>
> Key: ARROW-1983
> URL: https://issues.apache.org/jira/browse/ARROW-1983
> Project: Apache Arrow
> Issue Type: Improvement
> Components: C++, Python
> Reporter: Jim Crist
> Priority: Major
> Labels: beginner, parquet, pull-request-available
> Fix For: 0.14.0
>
> Time Spent: 6h 20m
> Remaining Estimate: 0h
>
> Currently {{pyarrow.parquet}} can only write the {{_common_metadata}} file
> (mostly just schema information). It would be useful to add the ability to
> write a {{_metadata}} file as well. This should include information about
> each row group in the dataset, including summary statistics. Having this
> summary file would allow filtering of row groups without needing to access
> each file beforehand.
> This would require that the user is able to get the written RowGroups out of
> a {{pyarrow.parquet.write_table}} call and then pass these objects as a list
> to a new function, which would hand them on as C++ objects to {{parquet-cpp}}
> to generate the respective {{_metadata}} file.
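> A rough sketch of what that new entry point could look like on the
> {{parquet-cpp}} side (the name and arguments are placeholders, not an existing
> API):
> {code:java}
> // Placeholder sketch: take the per-file metadata collected from each
> // write_table() call plus the relative path of each data file, stamp the
> // paths onto the row-group metadata, and serialize everything as the
> // dataset-level _metadata file.
> Status WriteDatasetMetadataFile(
>     const std::vector<std::shared_ptr<FileMetaData>>& per_file_metadata,
>     const std::vector<std::string>& relative_paths,
>     arrow::io::OutputStream* sink);
> {code}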